00:00:00.001 Started by upstream project "autotest-per-patch" build number 132858
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.034 using credential 00000000-0000-0000-0000-000000000002
00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.072 Using shallow fetch with depth 1
00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.072 > git --version # timeout=10
00:00:00.103 > git --version # 'git version 2.39.2'
00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.154 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.154 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.148 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.158 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.169 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.169 > git config core.sparsecheckout # timeout=10
00:00:04.178 > git read-tree -mu HEAD # timeout=10
00:00:04.192 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.207 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.207 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.311 [Pipeline] Start of Pipeline
00:00:04.324 [Pipeline] library
00:00:04.326 Loading library shm_lib@master
00:00:04.326 Library shm_lib@master is cached. Copying from home.
00:00:04.342 [Pipeline] node
00:00:04.353 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.355 [Pipeline] {
00:00:04.363 [Pipeline] catchError
00:00:04.364 [Pipeline] {
00:00:04.375 [Pipeline] wrap
00:00:04.382 [Pipeline] {
00:00:04.390 [Pipeline] stage
00:00:04.391 [Pipeline] { (Prologue)
00:00:04.406 [Pipeline] echo
00:00:04.408 Node: VM-host-WFP7
00:00:04.415 [Pipeline] cleanWs
00:00:04.430 [WS-CLEANUP] Deleting project workspace...
00:00:04.430 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.437 [WS-CLEANUP] done
00:00:04.639 [Pipeline] setCustomBuildProperty
00:00:04.710 [Pipeline] httpRequest
00:00:05.043 [Pipeline] echo
00:00:05.044 Sorcerer 10.211.164.20 is alive
00:00:05.052 [Pipeline] retry
00:00:05.054 [Pipeline] {
00:00:05.063 [Pipeline] httpRequest
00:00:05.067 HttpMethod: GET
00:00:05.068 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.068 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.069 Response Code: HTTP/1.1 200 OK
00:00:05.070 Success: Status code 200 is in the accepted range: 200,404
00:00:05.070 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.707 [Pipeline] }
00:00:05.718 [Pipeline] // retry
00:00:05.723 [Pipeline] sh
00:00:06.005 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.019 [Pipeline] httpRequest
00:00:06.553 [Pipeline] echo
00:00:06.555 Sorcerer 10.211.164.20 is alive
00:00:06.563 [Pipeline] retry
00:00:06.564 [Pipeline] {
00:00:06.573 [Pipeline] httpRequest
00:00:06.577 HttpMethod: GET
00:00:06.578 URL: http://10.211.164.20/packages/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz
00:00:06.579 Sending request to url: http://10.211.164.20/packages/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz
00:00:06.591 Response Code: HTTP/1.1 200 OK
00:00:06.591 Success: Status code 200 is in the accepted range: 200,404
00:00:06.592 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz
00:00:55.277 [Pipeline] }
00:00:55.293 [Pipeline] // retry
00:00:55.298 [Pipeline] sh
00:00:55.581 + tar --no-same-owner -xf spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz
00:00:58.136 [Pipeline] sh
00:00:58.421 + git -C spdk log --oneline -n5
00:00:58.421 575641720 lib/trace:fix encoding format in trace_register_description
00:00:58.421 92d1e663a bdev/nvme: Fix depopulating a namespace twice
00:00:58.421 52a413487 bdev: do not retry nomem I/Os during aborting them
00:00:58.421 d13942918 bdev: simplify bdev_reset_freeze_channel
00:00:58.421 0edc184ec accel/mlx5: Support mkey registration
00:00:58.440 [Pipeline] writeFile
00:00:58.454 [Pipeline] sh
00:00:58.740 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:58.752 [Pipeline] sh
00:00:59.037 + cat autorun-spdk.conf
00:00:59.037 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.037 SPDK_RUN_ASAN=1
00:00:59.037 SPDK_RUN_UBSAN=1
00:00:59.037 SPDK_TEST_RAID=1
00:00:59.037 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:59.044 RUN_NIGHTLY=0
00:00:59.046 [Pipeline] }
00:00:59.059 [Pipeline] // stage
00:00:59.073 [Pipeline] stage
00:00:59.075 [Pipeline] { (Run VM)
00:00:59.087 [Pipeline] sh
00:00:59.371 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:59.371 + echo 'Start stage prepare_nvme.sh'
00:00:59.371 Start stage prepare_nvme.sh
00:00:59.371 + [[ -n 0 ]]
00:00:59.371 + disk_prefix=ex0
00:00:59.371 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:59.371 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:59.371 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:59.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.371 ++ SPDK_RUN_ASAN=1
00:00:59.371 ++ SPDK_RUN_UBSAN=1
00:00:59.371 ++ SPDK_TEST_RAID=1
00:00:59.371 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:59.371 ++ RUN_NIGHTLY=0
00:00:59.371 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:59.371 + nvme_files=()
00:00:59.371 + declare -A nvme_files
00:00:59.371 + backend_dir=/var/lib/libvirt/images/backends
00:00:59.371 + nvme_files['nvme.img']=5G
00:00:59.371 + nvme_files['nvme-cmb.img']=5G
00:00:59.371 + nvme_files['nvme-multi0.img']=4G
00:00:59.371 + nvme_files['nvme-multi1.img']=4G
00:00:59.371 + nvme_files['nvme-multi2.img']=4G
00:00:59.371 + nvme_files['nvme-openstack.img']=8G
00:00:59.371 + nvme_files['nvme-zns.img']=5G
00:00:59.371 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:59.371 + (( SPDK_TEST_FTL == 1 ))
00:00:59.371 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:59.371 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:59.371 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.371 + for nvme in "${!nvme_files[@]}"
00:00:59.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:59.631 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.631 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:59.631 + echo 'End stage prepare_nvme.sh'
00:00:59.631 End stage prepare_nvme.sh
00:00:59.643 [Pipeline] sh
00:00:59.927 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:59.927 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:00:59.927
00:00:59.927 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:59.927 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:59.927 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:59.927 HELP=0
00:00:59.927 DRY_RUN=0
00:00:59.927 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:00:59.927 NVME_DISKS_TYPE=nvme,nvme,
00:00:59.927 NVME_AUTO_CREATE=0
00:00:59.927 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:00:59.927 NVME_CMB=,,
00:00:59.927 NVME_PMR=,,
00:00:59.927 NVME_ZNS=,,
00:00:59.927 NVME_MS=,,
00:00:59.927 NVME_FDP=,,
00:00:59.927 SPDK_VAGRANT_DISTRO=fedora39
00:00:59.927 SPDK_VAGRANT_VMCPU=10
00:00:59.927 SPDK_VAGRANT_VMRAM=12288
00:00:59.927 SPDK_VAGRANT_PROVIDER=libvirt
00:00:59.927 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:59.927 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:59.927 SPDK_OPENSTACK_NETWORK=0
00:00:59.927 VAGRANT_PACKAGE_BOX=0
00:00:59.927 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:59.927 FORCE_DISTRO=true
00:00:59.927 VAGRANT_BOX_VERSION=
00:00:59.927 EXTRA_VAGRANTFILES=
00:00:59.927 NIC_MODEL=virtio
00:00:59.927
00:00:59.927 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:59.927 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:01.837 Bringing machine 'default' up with 'libvirt' provider...
00:01:02.406 ==> default: Creating image (snapshot of base box volume).
00:01:02.406 ==> default: Creating domain with the following settings...
00:01:02.406 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734077534_e3bd7f30f9f2bb2590d1
00:01:02.406 ==> default: -- Domain type: kvm
00:01:02.407 ==> default: -- Cpus: 10
00:01:02.407 ==> default: -- Feature: acpi
00:01:02.407 ==> default: -- Feature: apic
00:01:02.407 ==> default: -- Feature: pae
00:01:02.407 ==> default: -- Memory: 12288M
00:01:02.407 ==> default: -- Memory Backing: hugepages:
00:01:02.407 ==> default: -- Management MAC:
00:01:02.407 ==> default: -- Loader:
00:01:02.407 ==> default: -- Nvram:
00:01:02.407 ==> default: -- Base box: spdk/fedora39
00:01:02.407 ==> default: -- Storage pool: default
00:01:02.407 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734077534_e3bd7f30f9f2bb2590d1.img (20G)
00:01:02.407 ==> default: -- Volume Cache: default
00:01:02.407 ==> default: -- Kernel:
00:01:02.407 ==> default: -- Initrd:
00:01:02.407 ==> default: -- Graphics Type: vnc
00:01:02.407 ==> default: -- Graphics Port: -1
00:01:02.407 ==> default: -- Graphics IP: 127.0.0.1
00:01:02.407 ==> default: -- Graphics Password: Not defined
00:01:02.407 ==> default: -- Video Type: cirrus
00:01:02.407 ==> default: -- Video VRAM: 9216
00:01:02.407 ==> default: -- Sound Type:
00:01:02.407 ==> default: -- Keymap: en-us
00:01:02.407 ==> default: -- TPM Path:
00:01:02.407 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:02.407 ==> default: -- Command line args:
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:02.407 ==> default: -> value=-drive,
00:01:02.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:02.407 ==> default: -> value=-drive,
00:01:02.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.407 ==> default: -> value=-drive,
00:01:02.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.407 ==> default: -> value=-drive,
00:01:02.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:02.407 ==> default: -> value=-device,
00:01:02.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.667 ==> default: Creating shared folders metadata...
00:01:02.667 ==> default: Starting domain.
00:01:04.053 ==> default: Waiting for domain to get an IP address...
00:01:22.177 ==> default: Waiting for SSH to become available...
00:01:22.177 ==> default: Configuring and enabling network interfaces...
00:01:27.455 default: SSH address: 192.168.121.242:22
00:01:27.455 default: SSH username: vagrant
00:01:27.455 default: SSH auth method: private key
00:01:29.994 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:38.120 ==> default: Mounting SSHFS shared folder...
00:01:40.045 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:40.045 ==> default: Checking Mount..
00:01:41.952 ==> default: Folder Successfully Mounted!
00:01:41.952 ==> default: Running provisioner: file...
00:01:42.887 default: ~/.gitconfig => .gitconfig
00:01:43.454
00:01:43.454 SUCCESS!
00:01:43.454
00:01:43.454 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:43.454 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:43.454 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:43.454
00:01:43.463 [Pipeline] }
00:01:43.476 [Pipeline] // stage
00:01:43.484 [Pipeline] dir
00:01:43.485 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:43.485 [Pipeline] {
00:01:43.496 [Pipeline] catchError
00:01:43.497 [Pipeline] {
00:01:43.506 [Pipeline] sh
00:01:43.788 + + vagrant ssh-config --host vagrant
00:01:43.788 sed+ -ne /^Host/,$p
00:01:43.788 tee ssh_conf
00:01:46.327 Host vagrant
00:01:46.327 HostName 192.168.121.242
00:01:46.327 User vagrant
00:01:46.327 Port 22
00:01:46.327 UserKnownHostsFile /dev/null
00:01:46.327 StrictHostKeyChecking no
00:01:46.327 PasswordAuthentication no
00:01:46.327 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:46.327 IdentitiesOnly yes
00:01:46.327 LogLevel FATAL
00:01:46.327 ForwardAgent yes
00:01:46.327 ForwardX11 yes
00:01:46.327
00:01:46.342 [Pipeline] withEnv
00:01:46.344 [Pipeline] {
00:01:46.359 [Pipeline] sh
00:01:46.647 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:46.647 source /etc/os-release
00:01:46.647 [[ -e /image.version ]] && img=$(< /image.version)
00:01:46.647 # Minimal, systemd-like check.
00:01:46.647 if [[ -e /.dockerenv ]]; then
00:01:46.647 # Clear garbage from the node's name:
00:01:46.647 # agt-er_autotest_547-896 -> autotest_547-896
00:01:46.647 # $HOSTNAME is the actual container id
00:01:46.647 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:46.647 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:46.647 # We can assume this is a mount from a host where container is running,
00:01:46.647 # so fetch its hostname to easily identify the target swarm worker.
00:01:46.647 container="$(< /etc/hostname) ($agent)"
00:01:46.647 else
00:01:46.647 # Fallback
00:01:46.647 container=$agent
00:01:46.647 fi
00:01:46.647 fi
00:01:46.647 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:46.647
00:01:46.921 [Pipeline] }
00:01:46.936 [Pipeline] // withEnv
00:01:46.944 [Pipeline] setCustomBuildProperty
00:01:46.958 [Pipeline] stage
00:01:46.960 [Pipeline] { (Tests)
00:01:46.976 [Pipeline] sh
00:01:47.258 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:47.607 [Pipeline] sh
00:01:47.891 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:48.165 [Pipeline] timeout
00:01:48.165 Timeout set to expire in 1 hr 30 min
00:01:48.167 [Pipeline] {
00:01:48.181 [Pipeline] sh
00:01:48.466 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:49.037 HEAD is now at 575641720 lib/trace:fix encoding format in trace_register_description
00:01:49.049 [Pipeline] sh
00:01:49.333 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:49.607 [Pipeline] sh
00:01:49.892 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:50.168 [Pipeline] sh
00:01:50.453 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:50.713 ++ readlink -f spdk_repo
00:01:50.713 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:50.713 + [[ -n /home/vagrant/spdk_repo ]]
00:01:50.713 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:50.713 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:50.713 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:50.713 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:50.713 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:50.713 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:50.713 + cd /home/vagrant/spdk_repo
00:01:50.713 + source /etc/os-release
00:01:50.713 ++ NAME='Fedora Linux'
00:01:50.713 ++ VERSION='39 (Cloud Edition)'
00:01:50.713 ++ ID=fedora
00:01:50.713 ++ VERSION_ID=39
00:01:50.713 ++ VERSION_CODENAME=
00:01:50.713 ++ PLATFORM_ID=platform:f39
00:01:50.713 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:50.713 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:50.713 ++ LOGO=fedora-logo-icon
00:01:50.713 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:50.713 ++ HOME_URL=https://fedoraproject.org/
00:01:50.713 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:50.713 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:50.713 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:50.713 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:50.713 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:50.713 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:50.713 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:50.713 ++ SUPPORT_END=2024-11-12
00:01:50.713 ++ VARIANT='Cloud Edition'
00:01:50.713 ++ VARIANT_ID=cloud
00:01:50.713 + uname -a
00:01:50.713 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:50.713 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:51.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:51.283 Hugepages
00:01:51.283 node hugesize free / total
00:01:51.283 node0 1048576kB 0 / 0
00:01:51.283 node0 2048kB 0 / 0
00:01:51.283
00:01:51.283 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:51.283 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:51.283 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:51.283 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:51.283 + rm -f /tmp/spdk-ld-path
00:01:51.283 + source autorun-spdk.conf
00:01:51.283 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.283 ++ SPDK_RUN_ASAN=1
00:01:51.283 ++ SPDK_RUN_UBSAN=1
00:01:51.283 ++ SPDK_TEST_RAID=1
00:01:51.283 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.283 ++ RUN_NIGHTLY=0
00:01:51.283 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:51.283 + [[ -n '' ]]
00:01:51.283 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:51.543 + for M in /var/spdk/build-*-manifest.txt
00:01:51.543 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:51.543 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.543 + for M in /var/spdk/build-*-manifest.txt
00:01:51.543 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:51.543 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.543 + for M in /var/spdk/build-*-manifest.txt
00:01:51.543 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:51.543 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:51.543 ++ uname
00:01:51.543 + [[ Linux == \L\i\n\u\x ]]
00:01:51.543 + sudo dmesg -T
00:01:51.543 + sudo dmesg --clear
00:01:51.543 + dmesg_pid=5433
00:01:51.543 + sudo dmesg -Tw
00:01:51.543 + [[ Fedora Linux == FreeBSD ]]
00:01:51.543 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.543 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.543 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:51.543 + [[ -x /usr/src/fio-static/fio ]]
00:01:51.543 + export FIO_BIN=/usr/src/fio-static/fio
00:01:51.543 + FIO_BIN=/usr/src/fio-static/fio
00:01:51.543 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:51.543 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:51.543 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:51.543 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.543 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.543 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:51.543 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.543 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.543 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:51.543 08:13:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:51.544 08:13:03 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.544 08:13:03 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:51.544 08:13:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:51.544 08:13:03 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:51.804 08:13:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:51.804 08:13:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:51.804 08:13:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:51.804 08:13:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:51.804 08:13:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:51.804 08:13:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:51.804 08:13:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.804 08:13:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.804 08:13:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.804 08:13:03 -- paths/export.sh@5 -- $ export PATH
00:01:51.804 08:13:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:51.804 08:13:03 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:51.804 08:13:03 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:51.804 08:13:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734077583.XXXXXX
00:01:51.804 08:13:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734077583.Z8uOzi
00:01:51.804 08:13:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:51.804 08:13:03 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:51.804 08:13:03 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:51.804 08:13:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:51.804 08:13:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:51.804 08:13:04 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:51.804 08:13:04 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:51.804 08:13:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.804 08:13:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:51.804 08:13:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:51.804 08:13:04 -- pm/common@17 -- $ local monitor
00:01:51.804 08:13:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:51.804 08:13:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:51.804 08:13:04 -- pm/common@25 -- $ sleep 1
00:01:51.804 08:13:04 -- pm/common@21 -- $ date +%s
00:01:51.804 08:13:04 -- pm/common@21 -- $ date +%s
00:01:51.805 08:13:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734077584
00:01:51.805 08:13:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734077584
00:01:51.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734077584_collect-cpu-load.pm.log
00:01:51.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734077584_collect-vmstat.pm.log
00:01:52.746 08:13:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:52.746 08:13:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:52.746 08:13:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:52.746 08:13:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:52.746 08:13:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:52.746 Fri Dec 13 08:13:05 AM UTC 2024
00:01:52.746 08:13:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:52.746 v25.01-pre-326-g575641720
00:01:52.746 08:13:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:52.746 08:13:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:52.746 08:13:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:52.746 08:13:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:52.746 08:13:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.746 ************************************
00:01:52.746 START TEST asan
00:01:52.746 ************************************
00:01:52.746 using asan
00:01:52.746 08:13:05 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:52.746
00:01:52.746 real 0m0.001s
00:01:52.746 user 0m0.000s
00:01:52.746 sys 0m0.000s
00:01:52.746 08:13:05 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:52.746 08:13:05 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:52.746 ************************************
00:01:52.746 END TEST asan
00:01:52.746 ************************************
00:01:53.005 08:13:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:53.005 08:13:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:53.005 08:13:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:53.005 08:13:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:53.005 08:13:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.005 ************************************
00:01:53.005 START TEST ubsan
00:01:53.005 ************************************
00:01:53.005 using ubsan
00:01:53.006 08:13:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:53.006
00:01:53.006 real 0m0.000s
00:01:53.006 user 0m0.000s
00:01:53.006 sys 0m0.000s
00:01:53.006 08:13:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:53.006 08:13:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.006 ************************************
00:01:53.006 END TEST ubsan
00:01:53.006 ************************************
00:01:53.006 08:13:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:53.006 08:13:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:53.006 08:13:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:53.006 08:13:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:53.006 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:53.006 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:53.574 Using 'verbs' RDMA provider
00:02:09.897 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:28.002 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:28.002 Creating mk/config.mk...done.
00:02:28.002 Creating mk/cc.flags.mk...done.
00:02:28.002 Type 'make' to build.
00:02:28.002 08:13:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:28.002 08:13:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:28.002 08:13:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:28.002 08:13:38 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.002 ************************************
00:02:28.002 START TEST make
00:02:28.002 ************************************
00:02:28.002 08:13:38 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:28.002 make[1]: Nothing to be done for 'all'.
00:02:37.986 The Meson build system 00:02:37.986 Version: 1.5.0 00:02:37.986 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:37.986 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:37.986 Build type: native build 00:02:37.986 Program cat found: YES (/usr/bin/cat) 00:02:37.986 Project name: DPDK 00:02:37.986 Project version: 24.03.0 00:02:37.986 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.986 C linker for the host machine: cc ld.bfd 2.40-14 00:02:37.986 Host machine cpu family: x86_64 00:02:37.986 Host machine cpu: x86_64 00:02:37.986 Message: ## Building in Developer Mode ## 00:02:37.986 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.986 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:37.986 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.986 Program python3 found: YES (/usr/bin/python3) 00:02:37.986 Program cat found: YES (/usr/bin/cat) 00:02:37.986 Compiler for C supports arguments -march=native: YES 00:02:37.986 Checking for size of "void *" : 8 00:02:37.986 Checking for size of "void *" : 8 (cached) 00:02:37.986 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:37.986 Library m found: YES 00:02:37.986 Library numa found: YES 00:02:37.986 Has header "numaif.h" : YES 00:02:37.986 Library fdt found: NO 00:02:37.986 Library execinfo found: NO 00:02:37.986 Has header "execinfo.h" : YES 00:02:37.986 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.986 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.986 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.986 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.986 Run-time dependency openssl found: YES 3.1.1 00:02:37.986 Run-time dependency libpcap found: YES 1.10.4 00:02:37.986 Has header "pcap.h" with dependency 
libpcap: YES 00:02:37.986 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.986 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.986 Compiler for C supports arguments -Wformat: YES 00:02:37.986 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.986 Compiler for C supports arguments -Wformat-security: NO 00:02:37.986 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.986 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.986 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.986 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.986 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.986 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.986 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.986 Compiler for C supports arguments -Wundef: YES 00:02:37.986 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.986 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.986 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.986 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.986 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.986 Program objdump found: YES (/usr/bin/objdump) 00:02:37.986 Compiler for C supports arguments -mavx512f: YES 00:02:37.986 Checking if "AVX512 checking" compiles: YES 00:02:37.986 Fetching value of define "__SSE4_2__" : 1 00:02:37.986 Fetching value of define "__AES__" : 1 00:02:37.986 Fetching value of define "__AVX__" : 1 00:02:37.986 Fetching value of define "__AVX2__" : 1 00:02:37.986 Fetching value of define "__AVX512BW__" : 1 00:02:37.986 Fetching value of define "__AVX512CD__" : 1 00:02:37.986 Fetching value of define "__AVX512DQ__" : 1 00:02:37.986 Fetching value of define "__AVX512F__" : 1 00:02:37.986 Fetching value of define "__AVX512VL__" : 1 00:02:37.986 Fetching value of define 
"__PCLMUL__" : 1 00:02:37.986 Fetching value of define "__RDRND__" : 1 00:02:37.986 Fetching value of define "__RDSEED__" : 1 00:02:37.986 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.986 Fetching value of define "__znver1__" : (undefined) 00:02:37.986 Fetching value of define "__znver2__" : (undefined) 00:02:37.986 Fetching value of define "__znver3__" : (undefined) 00:02:37.986 Fetching value of define "__znver4__" : (undefined) 00:02:37.986 Library asan found: YES 00:02:37.986 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.986 Message: lib/log: Defining dependency "log" 00:02:37.986 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.986 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.986 Library rt found: YES 00:02:37.986 Checking for function "getentropy" : NO 00:02:37.986 Message: lib/eal: Defining dependency "eal" 00:02:37.986 Message: lib/ring: Defining dependency "ring" 00:02:37.986 Message: lib/rcu: Defining dependency "rcu" 00:02:37.986 Message: lib/mempool: Defining dependency "mempool" 00:02:37.986 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.986 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.986 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.986 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.986 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.986 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.986 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.986 Compiler for C supports arguments -mpclmul: YES 00:02:37.986 Compiler for C supports arguments -maes: YES 00:02:37.986 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.986 Compiler for C supports arguments -mavx512bw: YES 00:02:37.986 Compiler for C supports arguments -mavx512dq: YES 00:02:37.986 Compiler for C supports arguments -mavx512vl: YES 00:02:37.986 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:37.986 Compiler for C supports arguments -mavx2: YES 00:02:37.986 Compiler for C supports arguments -mavx: YES 00:02:37.986 Message: lib/net: Defining dependency "net" 00:02:37.986 Message: lib/meter: Defining dependency "meter" 00:02:37.986 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.986 Message: lib/pci: Defining dependency "pci" 00:02:37.986 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.986 Message: lib/hash: Defining dependency "hash" 00:02:37.986 Message: lib/timer: Defining dependency "timer" 00:02:37.986 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.986 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.986 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.986 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.986 Message: lib/power: Defining dependency "power" 00:02:37.986 Message: lib/reorder: Defining dependency "reorder" 00:02:37.986 Message: lib/security: Defining dependency "security" 00:02:37.986 Has header "linux/userfaultfd.h" : YES 00:02:37.986 Has header "linux/vduse.h" : YES 00:02:37.986 Message: lib/vhost: Defining dependency "vhost" 00:02:37.986 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.986 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.986 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.986 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.986 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:37.986 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:37.986 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:37.986 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:37.986 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:37.986 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:37.986 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:37.986 Configuring doxy-api-html.conf using configuration 00:02:37.986 Configuring doxy-api-man.conf using configuration 00:02:37.986 Program mandb found: YES (/usr/bin/mandb) 00:02:37.986 Program sphinx-build found: NO 00:02:37.986 Configuring rte_build_config.h using configuration 00:02:37.986 Message: 00:02:37.986 ================= 00:02:37.986 Applications Enabled 00:02:37.986 ================= 00:02:37.986 00:02:37.986 apps: 00:02:37.986 00:02:37.986 00:02:37.986 Message: 00:02:37.986 ================= 00:02:37.986 Libraries Enabled 00:02:37.986 ================= 00:02:37.986 00:02:37.986 libs: 00:02:37.986 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:37.986 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:37.986 cryptodev, dmadev, power, reorder, security, vhost, 00:02:37.986 00:02:37.986 Message: 00:02:37.986 =============== 00:02:37.986 Drivers Enabled 00:02:37.986 =============== 00:02:37.986 00:02:37.987 common: 00:02:37.987 00:02:37.987 bus: 00:02:37.987 pci, vdev, 00:02:37.987 mempool: 00:02:37.987 ring, 00:02:37.987 dma: 00:02:37.987 00:02:37.987 net: 00:02:37.987 00:02:37.987 crypto: 00:02:37.987 00:02:37.987 compress: 00:02:37.987 00:02:37.987 vdpa: 00:02:37.987 00:02:37.987 00:02:37.987 Message: 00:02:37.987 ================= 00:02:37.987 Content Skipped 00:02:37.987 ================= 00:02:37.987 00:02:37.987 apps: 00:02:37.987 dumpcap: explicitly disabled via build config 00:02:37.987 graph: explicitly disabled via build config 00:02:37.987 pdump: explicitly disabled via build config 00:02:37.987 proc-info: explicitly disabled via build config 00:02:37.987 test-acl: explicitly disabled via build config 00:02:37.987 test-bbdev: explicitly disabled via build config 00:02:37.987 test-cmdline: explicitly disabled via build config 00:02:37.987 test-compress-perf: explicitly disabled via build config 00:02:37.987 test-crypto-perf: explicitly disabled via build 
config 00:02:37.987 test-dma-perf: explicitly disabled via build config 00:02:37.987 test-eventdev: explicitly disabled via build config 00:02:37.987 test-fib: explicitly disabled via build config 00:02:37.987 test-flow-perf: explicitly disabled via build config 00:02:37.987 test-gpudev: explicitly disabled via build config 00:02:37.987 test-mldev: explicitly disabled via build config 00:02:37.987 test-pipeline: explicitly disabled via build config 00:02:37.987 test-pmd: explicitly disabled via build config 00:02:37.987 test-regex: explicitly disabled via build config 00:02:37.987 test-sad: explicitly disabled via build config 00:02:37.987 test-security-perf: explicitly disabled via build config 00:02:37.987 00:02:37.987 libs: 00:02:37.987 argparse: explicitly disabled via build config 00:02:37.987 metrics: explicitly disabled via build config 00:02:37.987 acl: explicitly disabled via build config 00:02:37.987 bbdev: explicitly disabled via build config 00:02:37.987 bitratestats: explicitly disabled via build config 00:02:37.987 bpf: explicitly disabled via build config 00:02:37.987 cfgfile: explicitly disabled via build config 00:02:37.987 distributor: explicitly disabled via build config 00:02:37.987 efd: explicitly disabled via build config 00:02:37.987 eventdev: explicitly disabled via build config 00:02:37.987 dispatcher: explicitly disabled via build config 00:02:37.987 gpudev: explicitly disabled via build config 00:02:37.987 gro: explicitly disabled via build config 00:02:37.987 gso: explicitly disabled via build config 00:02:37.987 ip_frag: explicitly disabled via build config 00:02:37.987 jobstats: explicitly disabled via build config 00:02:37.987 latencystats: explicitly disabled via build config 00:02:37.987 lpm: explicitly disabled via build config 00:02:37.987 member: explicitly disabled via build config 00:02:37.987 pcapng: explicitly disabled via build config 00:02:37.987 rawdev: explicitly disabled via build config 00:02:37.987 regexdev: explicitly 
disabled via build config 00:02:37.987 mldev: explicitly disabled via build config 00:02:37.987 rib: explicitly disabled via build config 00:02:37.987 sched: explicitly disabled via build config 00:02:37.987 stack: explicitly disabled via build config 00:02:37.987 ipsec: explicitly disabled via build config 00:02:37.987 pdcp: explicitly disabled via build config 00:02:37.987 fib: explicitly disabled via build config 00:02:37.987 port: explicitly disabled via build config 00:02:37.987 pdump: explicitly disabled via build config 00:02:37.987 table: explicitly disabled via build config 00:02:37.987 pipeline: explicitly disabled via build config 00:02:37.987 graph: explicitly disabled via build config 00:02:37.987 node: explicitly disabled via build config 00:02:37.987 00:02:37.987 drivers: 00:02:37.987 common/cpt: not in enabled drivers build config 00:02:37.987 common/dpaax: not in enabled drivers build config 00:02:37.987 common/iavf: not in enabled drivers build config 00:02:37.987 common/idpf: not in enabled drivers build config 00:02:37.987 common/ionic: not in enabled drivers build config 00:02:37.987 common/mvep: not in enabled drivers build config 00:02:37.987 common/octeontx: not in enabled drivers build config 00:02:37.987 bus/auxiliary: not in enabled drivers build config 00:02:37.987 bus/cdx: not in enabled drivers build config 00:02:37.987 bus/dpaa: not in enabled drivers build config 00:02:37.987 bus/fslmc: not in enabled drivers build config 00:02:37.987 bus/ifpga: not in enabled drivers build config 00:02:37.987 bus/platform: not in enabled drivers build config 00:02:37.987 bus/uacce: not in enabled drivers build config 00:02:37.987 bus/vmbus: not in enabled drivers build config 00:02:37.987 common/cnxk: not in enabled drivers build config 00:02:37.987 common/mlx5: not in enabled drivers build config 00:02:37.987 common/nfp: not in enabled drivers build config 00:02:37.987 common/nitrox: not in enabled drivers build config 00:02:37.987 common/qat: not 
in enabled drivers build config 00:02:37.987 common/sfc_efx: not in enabled drivers build config 00:02:37.987 mempool/bucket: not in enabled drivers build config 00:02:37.987 mempool/cnxk: not in enabled drivers build config 00:02:37.987 mempool/dpaa: not in enabled drivers build config 00:02:37.987 mempool/dpaa2: not in enabled drivers build config 00:02:37.987 mempool/octeontx: not in enabled drivers build config 00:02:37.987 mempool/stack: not in enabled drivers build config 00:02:37.987 dma/cnxk: not in enabled drivers build config 00:02:37.987 dma/dpaa: not in enabled drivers build config 00:02:37.987 dma/dpaa2: not in enabled drivers build config 00:02:37.987 dma/hisilicon: not in enabled drivers build config 00:02:37.987 dma/idxd: not in enabled drivers build config 00:02:37.987 dma/ioat: not in enabled drivers build config 00:02:37.987 dma/skeleton: not in enabled drivers build config 00:02:37.987 net/af_packet: not in enabled drivers build config 00:02:37.987 net/af_xdp: not in enabled drivers build config 00:02:37.987 net/ark: not in enabled drivers build config 00:02:37.987 net/atlantic: not in enabled drivers build config 00:02:37.987 net/avp: not in enabled drivers build config 00:02:37.987 net/axgbe: not in enabled drivers build config 00:02:37.987 net/bnx2x: not in enabled drivers build config 00:02:37.987 net/bnxt: not in enabled drivers build config 00:02:37.987 net/bonding: not in enabled drivers build config 00:02:37.987 net/cnxk: not in enabled drivers build config 00:02:37.987 net/cpfl: not in enabled drivers build config 00:02:37.987 net/cxgbe: not in enabled drivers build config 00:02:37.987 net/dpaa: not in enabled drivers build config 00:02:37.987 net/dpaa2: not in enabled drivers build config 00:02:37.987 net/e1000: not in enabled drivers build config 00:02:37.987 net/ena: not in enabled drivers build config 00:02:37.987 net/enetc: not in enabled drivers build config 00:02:37.987 net/enetfec: not in enabled drivers build config 
00:02:37.987 net/enic: not in enabled drivers build config 00:02:37.987 net/failsafe: not in enabled drivers build config 00:02:37.987 net/fm10k: not in enabled drivers build config 00:02:37.987 net/gve: not in enabled drivers build config 00:02:37.987 net/hinic: not in enabled drivers build config 00:02:37.987 net/hns3: not in enabled drivers build config 00:02:37.987 net/i40e: not in enabled drivers build config 00:02:37.987 net/iavf: not in enabled drivers build config 00:02:37.987 net/ice: not in enabled drivers build config 00:02:37.987 net/idpf: not in enabled drivers build config 00:02:37.987 net/igc: not in enabled drivers build config 00:02:37.987 net/ionic: not in enabled drivers build config 00:02:37.987 net/ipn3ke: not in enabled drivers build config 00:02:37.987 net/ixgbe: not in enabled drivers build config 00:02:37.987 net/mana: not in enabled drivers build config 00:02:37.987 net/memif: not in enabled drivers build config 00:02:37.987 net/mlx4: not in enabled drivers build config 00:02:37.987 net/mlx5: not in enabled drivers build config 00:02:37.987 net/mvneta: not in enabled drivers build config 00:02:37.987 net/mvpp2: not in enabled drivers build config 00:02:37.987 net/netvsc: not in enabled drivers build config 00:02:37.987 net/nfb: not in enabled drivers build config 00:02:37.987 net/nfp: not in enabled drivers build config 00:02:37.987 net/ngbe: not in enabled drivers build config 00:02:37.987 net/null: not in enabled drivers build config 00:02:37.987 net/octeontx: not in enabled drivers build config 00:02:37.987 net/octeon_ep: not in enabled drivers build config 00:02:37.987 net/pcap: not in enabled drivers build config 00:02:37.987 net/pfe: not in enabled drivers build config 00:02:37.987 net/qede: not in enabled drivers build config 00:02:37.987 net/ring: not in enabled drivers build config 00:02:37.987 net/sfc: not in enabled drivers build config 00:02:37.987 net/softnic: not in enabled drivers build config 00:02:37.987 net/tap: not in 
enabled drivers build config 00:02:37.987 net/thunderx: not in enabled drivers build config 00:02:37.987 net/txgbe: not in enabled drivers build config 00:02:37.987 net/vdev_netvsc: not in enabled drivers build config 00:02:37.987 net/vhost: not in enabled drivers build config 00:02:37.987 net/virtio: not in enabled drivers build config 00:02:37.987 net/vmxnet3: not in enabled drivers build config 00:02:37.987 raw/*: missing internal dependency, "rawdev" 00:02:37.987 crypto/armv8: not in enabled drivers build config 00:02:37.987 crypto/bcmfs: not in enabled drivers build config 00:02:37.987 crypto/caam_jr: not in enabled drivers build config 00:02:37.987 crypto/ccp: not in enabled drivers build config 00:02:37.987 crypto/cnxk: not in enabled drivers build config 00:02:37.987 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.987 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.987 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.987 crypto/mlx5: not in enabled drivers build config 00:02:37.987 crypto/mvsam: not in enabled drivers build config 00:02:37.987 crypto/nitrox: not in enabled drivers build config 00:02:37.987 crypto/null: not in enabled drivers build config 00:02:37.987 crypto/octeontx: not in enabled drivers build config 00:02:37.987 crypto/openssl: not in enabled drivers build config 00:02:37.987 crypto/scheduler: not in enabled drivers build config 00:02:37.987 crypto/uadk: not in enabled drivers build config 00:02:37.987 crypto/virtio: not in enabled drivers build config 00:02:37.987 compress/isal: not in enabled drivers build config 00:02:37.987 compress/mlx5: not in enabled drivers build config 00:02:37.987 compress/nitrox: not in enabled drivers build config 00:02:37.987 compress/octeontx: not in enabled drivers build config 00:02:37.987 compress/zlib: not in enabled drivers build config 00:02:37.987 regex/*: missing internal dependency, "regexdev" 00:02:37.987 ml/*: missing internal dependency, "mldev" 
00:02:37.987 vdpa/ifc: not in enabled drivers build config 00:02:37.987 vdpa/mlx5: not in enabled drivers build config 00:02:37.987 vdpa/nfp: not in enabled drivers build config 00:02:37.988 vdpa/sfc: not in enabled drivers build config 00:02:37.988 event/*: missing internal dependency, "eventdev" 00:02:37.988 baseband/*: missing internal dependency, "bbdev" 00:02:37.988 gpu/*: missing internal dependency, "gpudev" 00:02:37.988 00:02:37.988 00:02:37.988 Build targets in project: 85 00:02:37.988 00:02:37.988 DPDK 24.03.0 00:02:37.988 00:02:37.988 User defined options 00:02:37.988 buildtype : debug 00:02:37.988 default_library : shared 00:02:37.988 libdir : lib 00:02:37.988 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.988 b_sanitize : address 00:02:37.988 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:37.988 c_link_args : 00:02:37.988 cpu_instruction_set: native 00:02:37.988 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:37.988 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:37.988 enable_docs : false 00:02:37.988 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:37.988 enable_kmods : false 00:02:37.988 max_lcores : 128 00:02:37.988 tests : false 00:02:37.988 00:02:37.988 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.988 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:37.988 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:37.988 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.988 [3/268] Linking static target lib/librte_kvargs.a 00:02:37.988 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.988 [5/268] Linking static target lib/librte_log.a 00:02:37.988 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.988 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.988 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.246 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.246 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.246 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.246 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.246 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.246 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.246 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.246 [16/268] Linking static target lib/librte_telemetry.a 00:02:38.246 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.504 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.763 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.763 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.763 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.763 [22/268] Linking target lib/librte_log.so.24.1 00:02:38.763 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.763 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.763 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.763 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.022 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.022 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:39.022 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:39.022 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.022 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.022 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.022 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.280 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:39.280 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.280 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.280 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.280 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.539 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.539 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:39.539 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.539 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.539 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.539 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.798 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.798 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.798 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.798 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.056 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.056 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.056 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.056 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.315 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.315 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.315 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.315 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.315 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.315 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.315 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.573 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.573 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.832 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.832 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.832 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.832 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.832 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.832 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:02:41.089 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.089 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.348 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.348 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.348 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.348 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.348 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:41.348 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.607 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:41.607 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:41.607 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.607 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.607 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.865 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.865 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.865 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.865 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.124 [85/268] Linking static target lib/librte_eal.a 00:02:42.124 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.124 [87/268] Linking static target lib/librte_ring.a 00:02:42.124 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:42.382 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.382 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.382 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:42.382 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.382 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.382 [94/268] Linking static target lib/librte_rcu.a 00:02:42.382 [95/268] Linking static target lib/librte_mempool.a 00:02:42.641 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.641 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.641 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.900 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.900 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.900 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.900 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.900 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.900 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.158 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.158 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.158 [107/268] Linking static target lib/librte_net.a 00:02:43.416 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.416 [109/268] Linking static target lib/librte_mbuf.a 00:02:43.416 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.416 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.416 [112/268] Linking static target lib/librte_meter.a 00:02:43.675 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.675 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.675 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.675 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.675 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.935 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.194 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.194 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.194 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.454 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.454 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.454 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.713 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.713 [126/268] Linking static target lib/librte_pci.a 00:02:44.713 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.713 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.713 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.971 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.971 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.971 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.971 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.971 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.971 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.971 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.971 [137/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.971 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.230 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.230 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.230 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.230 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.230 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.230 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.230 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.230 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.230 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.230 [148/268] Linking static target lib/librte_cmdline.a 00:02:45.488 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.488 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.488 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.747 [152/268] Linking static target lib/librte_timer.a 00:02:45.747 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.747 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.005 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.005 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.264 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.265 [158/268] Linking static target lib/librte_compressdev.a 00:02:46.265 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.265 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.265 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.523 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.524 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.524 [164/268] Linking static target lib/librte_ethdev.a 00:02:46.524 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.524 [166/268] Linking static target lib/librte_dmadev.a 00:02:46.524 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.782 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.782 [169/268] Linking static target lib/librte_hash.a 00:02:46.782 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.782 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.041 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.041 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.041 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.299 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.299 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.299 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:47.557 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.557 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.557 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.557 [181/268] Compiling 
C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.816 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.816 [183/268] Linking static target lib/librte_cryptodev.a 00:02:47.816 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.816 [185/268] Linking static target lib/librte_power.a 00:02:47.816 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:48.074 [187/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.074 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.074 [189/268] Linking static target lib/librte_reorder.a 00:02:48.333 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:48.333 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:48.333 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:48.333 [193/268] Linking static target lib/librte_security.a 00:02:48.906 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.906 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.191 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.191 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.191 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.191 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:49.456 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.714 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:49.714 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.714 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 
00:02:49.714 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.973 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.973 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.973 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.231 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.231 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.231 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.490 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.490 [212/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.490 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.490 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.490 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.490 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:50.490 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.490 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.490 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:50.490 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.490 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.749 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.749 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.749 [224/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.750 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.750 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:51.008 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.945 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.323 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.323 [230/268] Linking target lib/librte_eal.so.24.1 00:02:53.582 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.582 [232/268] Linking target lib/librte_ring.so.24.1 00:02:53.582 [233/268] Linking target lib/librte_meter.so.24.1 00:02:53.582 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:53.582 [235/268] Linking target lib/librte_pci.so.24.1 00:02:53.582 [236/268] Linking target lib/librte_timer.so.24.1 00:02:53.582 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.842 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.842 [239/268] Linking target lib/librte_mempool.so.24.1 00:02:53.842 [240/268] Linking target lib/librte_rcu.so.24.1 00:02:53.842 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.842 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.842 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.842 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.842 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.842 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.842 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.842 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:54.102 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.102 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.102 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:54.102 [252/268] Linking target lib/librte_net.so.24.1 00:02:54.102 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.102 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.361 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.361 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:54.361 [257/268] Linking target lib/librte_security.so.24.1 00:02:54.361 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:54.361 [259/268] Linking target lib/librte_hash.so.24.1 00:02:54.620 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.001 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.001 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:56.001 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.001 [264/268] Linking target lib/librte_power.so.24.1 00:02:57.392 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.392 [266/268] Linking static target lib/librte_vhost.a 00:02:59.921 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.921 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:59.921 INFO: autodetecting backend as ninja 00:02:59.921 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:18.009 CC lib/ut/ut.o 00:03:18.009 CC lib/log/log.o 00:03:18.009 CC lib/log/log_flags.o 
00:03:18.009 CC lib/log/log_deprecated.o 00:03:18.009 CC lib/ut_mock/mock.o 00:03:18.009 LIB libspdk_ut.a 00:03:18.009 LIB libspdk_log.a 00:03:18.009 LIB libspdk_ut_mock.a 00:03:18.009 SO libspdk_ut.so.2.0 00:03:18.009 SO libspdk_log.so.7.1 00:03:18.009 SO libspdk_ut_mock.so.6.0 00:03:18.009 SYMLINK libspdk_ut.so 00:03:18.009 SYMLINK libspdk_log.so 00:03:18.009 SYMLINK libspdk_ut_mock.so 00:03:18.009 CXX lib/trace_parser/trace.o 00:03:18.009 CC lib/util/base64.o 00:03:18.009 CC lib/util/crc32.o 00:03:18.009 CC lib/util/crc32c.o 00:03:18.009 CC lib/util/bit_array.o 00:03:18.009 CC lib/util/cpuset.o 00:03:18.009 CC lib/util/crc16.o 00:03:18.009 CC lib/dma/dma.o 00:03:18.009 CC lib/ioat/ioat.o 00:03:18.009 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.009 CC lib/util/crc32_ieee.o 00:03:18.009 CC lib/vfio_user/host/vfio_user.o 00:03:18.009 CC lib/util/crc64.o 00:03:18.009 CC lib/util/dif.o 00:03:18.009 CC lib/util/fd.o 00:03:18.009 CC lib/util/fd_group.o 00:03:18.009 LIB libspdk_dma.a 00:03:18.009 SO libspdk_dma.so.5.0 00:03:18.009 CC lib/util/file.o 00:03:18.009 CC lib/util/hexlify.o 00:03:18.009 SYMLINK libspdk_dma.so 00:03:18.009 CC lib/util/iov.o 00:03:18.009 LIB libspdk_ioat.a 00:03:18.009 SO libspdk_ioat.so.7.0 00:03:18.009 CC lib/util/math.o 00:03:18.009 CC lib/util/net.o 00:03:18.009 LIB libspdk_vfio_user.a 00:03:18.009 SYMLINK libspdk_ioat.so 00:03:18.009 CC lib/util/pipe.o 00:03:18.009 SO libspdk_vfio_user.so.5.0 00:03:18.009 CC lib/util/strerror_tls.o 00:03:18.009 CC lib/util/string.o 00:03:18.009 SYMLINK libspdk_vfio_user.so 00:03:18.009 CC lib/util/uuid.o 00:03:18.009 CC lib/util/xor.o 00:03:18.268 CC lib/util/zipf.o 00:03:18.268 CC lib/util/md5.o 00:03:18.526 LIB libspdk_util.a 00:03:18.526 SO libspdk_util.so.10.1 00:03:18.786 LIB libspdk_trace_parser.a 00:03:18.786 SYMLINK libspdk_util.so 00:03:18.786 SO libspdk_trace_parser.so.6.0 00:03:18.786 SYMLINK libspdk_trace_parser.so 00:03:18.786 CC lib/vmd/vmd.o 00:03:18.786 CC lib/conf/conf.o 00:03:18.786 CC 
lib/vmd/led.o 00:03:18.786 CC lib/env_dpdk/env.o 00:03:18.786 CC lib/env_dpdk/pci.o 00:03:18.786 CC lib/env_dpdk/memory.o 00:03:18.786 CC lib/env_dpdk/init.o 00:03:19.045 CC lib/rdma_utils/rdma_utils.o 00:03:19.045 CC lib/json/json_parse.o 00:03:19.045 CC lib/idxd/idxd.o 00:03:19.045 CC lib/idxd/idxd_user.o 00:03:19.045 LIB libspdk_conf.a 00:03:19.045 CC lib/json/json_util.o 00:03:19.304 SO libspdk_conf.so.6.0 00:03:19.304 LIB libspdk_rdma_utils.a 00:03:19.304 SO libspdk_rdma_utils.so.1.0 00:03:19.304 SYMLINK libspdk_conf.so 00:03:19.304 CC lib/env_dpdk/threads.o 00:03:19.304 SYMLINK libspdk_rdma_utils.so 00:03:19.304 CC lib/env_dpdk/pci_ioat.o 00:03:19.304 CC lib/json/json_write.o 00:03:19.304 CC lib/idxd/idxd_kernel.o 00:03:19.304 CC lib/env_dpdk/pci_virtio.o 00:03:19.304 CC lib/env_dpdk/pci_vmd.o 00:03:19.304 CC lib/env_dpdk/pci_idxd.o 00:03:19.304 CC lib/rdma_provider/common.o 00:03:19.562 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:19.562 CC lib/env_dpdk/pci_event.o 00:03:19.562 CC lib/env_dpdk/sigbus_handler.o 00:03:19.562 CC lib/env_dpdk/pci_dpdk.o 00:03:19.562 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:19.562 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.562 LIB libspdk_json.a 00:03:19.562 LIB libspdk_vmd.a 00:03:19.822 SO libspdk_json.so.6.0 00:03:19.822 SO libspdk_vmd.so.6.0 00:03:19.822 LIB libspdk_rdma_provider.a 00:03:19.822 LIB libspdk_idxd.a 00:03:19.822 SO libspdk_rdma_provider.so.7.0 00:03:19.822 SYMLINK libspdk_json.so 00:03:19.822 SYMLINK libspdk_vmd.so 00:03:19.822 SO libspdk_idxd.so.12.1 00:03:19.822 SYMLINK libspdk_rdma_provider.so 00:03:19.822 SYMLINK libspdk_idxd.so 00:03:20.080 CC lib/jsonrpc/jsonrpc_server.o 00:03:20.080 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:20.080 CC lib/jsonrpc/jsonrpc_client.o 00:03:20.080 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:20.339 LIB libspdk_jsonrpc.a 00:03:20.339 SO libspdk_jsonrpc.so.6.0 00:03:20.597 SYMLINK libspdk_jsonrpc.so 00:03:20.856 CC lib/rpc/rpc.o 00:03:20.856 LIB libspdk_env_dpdk.a 00:03:21.115 SO 
libspdk_env_dpdk.so.15.1 00:03:21.115 LIB libspdk_rpc.a 00:03:21.115 SYMLINK libspdk_env_dpdk.so 00:03:21.115 SO libspdk_rpc.so.6.0 00:03:21.373 SYMLINK libspdk_rpc.so 00:03:21.632 CC lib/notify/notify.o 00:03:21.632 CC lib/notify/notify_rpc.o 00:03:21.632 CC lib/keyring/keyring.o 00:03:21.632 CC lib/keyring/keyring_rpc.o 00:03:21.632 CC lib/trace/trace.o 00:03:21.632 CC lib/trace/trace_rpc.o 00:03:21.632 CC lib/trace/trace_flags.o 00:03:21.892 LIB libspdk_notify.a 00:03:21.892 SO libspdk_notify.so.6.0 00:03:21.892 SYMLINK libspdk_notify.so 00:03:21.892 LIB libspdk_keyring.a 00:03:21.892 LIB libspdk_trace.a 00:03:22.152 SO libspdk_keyring.so.2.0 00:03:22.152 SO libspdk_trace.so.11.0 00:03:22.152 SYMLINK libspdk_keyring.so 00:03:22.152 SYMLINK libspdk_trace.so 00:03:22.721 CC lib/thread/thread.o 00:03:22.721 CC lib/thread/iobuf.o 00:03:22.721 CC lib/sock/sock.o 00:03:22.721 CC lib/sock/sock_rpc.o 00:03:22.978 LIB libspdk_sock.a 00:03:23.237 SO libspdk_sock.so.10.0 00:03:23.237 SYMLINK libspdk_sock.so 00:03:23.496 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:23.496 CC lib/nvme/nvme_ns.o 00:03:23.496 CC lib/nvme/nvme_ctrlr.o 00:03:23.496 CC lib/nvme/nvme_fabric.o 00:03:23.496 CC lib/nvme/nvme_ns_cmd.o 00:03:23.496 CC lib/nvme/nvme_qpair.o 00:03:23.496 CC lib/nvme/nvme_pcie_common.o 00:03:23.496 CC lib/nvme/nvme_pcie.o 00:03:23.760 CC lib/nvme/nvme.o 00:03:24.335 CC lib/nvme/nvme_quirks.o 00:03:24.335 CC lib/nvme/nvme_transport.o 00:03:24.594 CC lib/nvme/nvme_discovery.o 00:03:24.594 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.594 LIB libspdk_thread.a 00:03:24.594 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.594 SO libspdk_thread.so.11.0 00:03:24.594 CC lib/nvme/nvme_tcp.o 00:03:24.852 SYMLINK libspdk_thread.so 00:03:24.852 CC lib/nvme/nvme_opal.o 00:03:24.852 CC lib/accel/accel.o 00:03:24.852 CC lib/nvme/nvme_io_msg.o 00:03:25.111 CC lib/nvme/nvme_poll_group.o 00:03:25.111 CC lib/nvme/nvme_zns.o 00:03:25.111 CC lib/accel/accel_rpc.o 00:03:25.369 CC lib/nvme/nvme_stubs.o 
00:03:25.369 CC lib/accel/accel_sw.o 00:03:25.369 CC lib/nvme/nvme_auth.o 00:03:25.369 CC lib/nvme/nvme_cuse.o 00:03:25.629 CC lib/nvme/nvme_rdma.o 00:03:25.887 CC lib/blob/blobstore.o 00:03:25.887 CC lib/init/json_config.o 00:03:25.887 CC lib/virtio/virtio.o 00:03:25.887 CC lib/fsdev/fsdev.o 00:03:26.145 CC lib/init/subsystem.o 00:03:26.145 CC lib/fsdev/fsdev_io.o 00:03:26.145 CC lib/virtio/virtio_vhost_user.o 00:03:26.403 CC lib/virtio/virtio_vfio_user.o 00:03:26.403 CC lib/init/subsystem_rpc.o 00:03:26.403 LIB libspdk_accel.a 00:03:26.403 CC lib/virtio/virtio_pci.o 00:03:26.403 SO libspdk_accel.so.16.0 00:03:26.403 SYMLINK libspdk_accel.so 00:03:26.403 CC lib/init/rpc.o 00:03:26.403 CC lib/blob/request.o 00:03:26.662 CC lib/blob/zeroes.o 00:03:26.662 CC lib/blob/blob_bs_dev.o 00:03:26.662 CC lib/fsdev/fsdev_rpc.o 00:03:26.662 LIB libspdk_init.a 00:03:26.662 LIB libspdk_virtio.a 00:03:26.662 SO libspdk_init.so.6.0 00:03:26.662 SO libspdk_virtio.so.7.0 00:03:26.920 SYMLINK libspdk_init.so 00:03:26.920 LIB libspdk_fsdev.a 00:03:26.920 SYMLINK libspdk_virtio.so 00:03:26.920 CC lib/bdev/bdev.o 00:03:26.920 SO libspdk_fsdev.so.2.0 00:03:26.920 CC lib/bdev/bdev_rpc.o 00:03:26.920 CC lib/bdev/bdev_zone.o 00:03:26.920 CC lib/bdev/part.o 00:03:26.920 CC lib/bdev/scsi_nvme.o 00:03:26.920 SYMLINK libspdk_fsdev.so 00:03:26.920 CC lib/event/app.o 00:03:26.920 CC lib/event/reactor.o 00:03:27.180 CC lib/event/log_rpc.o 00:03:27.180 CC lib/event/app_rpc.o 00:03:27.180 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:27.180 CC lib/event/scheduler_static.o 00:03:27.438 LIB libspdk_nvme.a 00:03:27.697 LIB libspdk_event.a 00:03:27.697 SO libspdk_event.so.14.0 00:03:27.697 SO libspdk_nvme.so.15.0 00:03:27.697 SYMLINK libspdk_event.so 00:03:27.956 LIB libspdk_fuse_dispatcher.a 00:03:27.956 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.956 SYMLINK libspdk_fuse_dispatcher.so 00:03:27.956 SYMLINK libspdk_nvme.so 00:03:30.506 LIB libspdk_blob.a 00:03:30.506 LIB libspdk_bdev.a 00:03:30.506 SO 
libspdk_blob.so.12.0 00:03:30.506 SO libspdk_bdev.so.17.0 00:03:30.506 SYMLINK libspdk_blob.so 00:03:30.506 SYMLINK libspdk_bdev.so 00:03:30.766 CC lib/lvol/lvol.o 00:03:30.766 CC lib/blobfs/blobfs.o 00:03:30.766 CC lib/blobfs/tree.o 00:03:30.766 CC lib/scsi/dev.o 00:03:30.766 CC lib/scsi/port.o 00:03:30.766 CC lib/scsi/lun.o 00:03:30.766 CC lib/ublk/ublk.o 00:03:30.766 CC lib/nvmf/ctrlr.o 00:03:30.766 CC lib/nbd/nbd.o 00:03:30.766 CC lib/ftl/ftl_core.o 00:03:31.024 CC lib/ftl/ftl_init.o 00:03:31.024 CC lib/ftl/ftl_layout.o 00:03:31.024 CC lib/ftl/ftl_debug.o 00:03:31.284 CC lib/scsi/scsi.o 00:03:31.284 CC lib/ublk/ublk_rpc.o 00:03:31.284 CC lib/scsi/scsi_bdev.o 00:03:31.284 CC lib/nbd/nbd_rpc.o 00:03:31.284 CC lib/ftl/ftl_io.o 00:03:31.284 CC lib/nvmf/ctrlr_discovery.o 00:03:31.284 CC lib/scsi/scsi_pr.o 00:03:31.284 CC lib/scsi/scsi_rpc.o 00:03:31.542 LIB libspdk_nbd.a 00:03:31.542 SO libspdk_nbd.so.7.0 00:03:31.542 LIB libspdk_ublk.a 00:03:31.542 CC lib/ftl/ftl_sb.o 00:03:31.542 SYMLINK libspdk_nbd.so 00:03:31.542 CC lib/nvmf/ctrlr_bdev.o 00:03:31.542 CC lib/nvmf/subsystem.o 00:03:31.542 SO libspdk_ublk.so.3.0 00:03:31.801 SYMLINK libspdk_ublk.so 00:03:31.801 CC lib/scsi/task.o 00:03:31.801 CC lib/nvmf/nvmf.o 00:03:31.801 LIB libspdk_blobfs.a 00:03:31.801 CC lib/ftl/ftl_l2p.o 00:03:31.801 SO libspdk_blobfs.so.11.0 00:03:32.060 LIB libspdk_lvol.a 00:03:32.060 CC lib/ftl/ftl_l2p_flat.o 00:03:32.060 SYMLINK libspdk_blobfs.so 00:03:32.060 CC lib/ftl/ftl_nv_cache.o 00:03:32.060 CC lib/ftl/ftl_band.o 00:03:32.060 LIB libspdk_scsi.a 00:03:32.060 SO libspdk_lvol.so.11.0 00:03:32.060 CC lib/ftl/ftl_band_ops.o 00:03:32.060 SO libspdk_scsi.so.9.0 00:03:32.060 SYMLINK libspdk_lvol.so 00:03:32.060 CC lib/ftl/ftl_writer.o 00:03:32.060 CC lib/ftl/ftl_rq.o 00:03:32.060 SYMLINK libspdk_scsi.so 00:03:32.060 CC lib/ftl/ftl_reloc.o 00:03:32.318 CC lib/ftl/ftl_l2p_cache.o 00:03:32.318 CC lib/nvmf/nvmf_rpc.o 00:03:32.576 CC lib/ftl/ftl_p2l.o 00:03:32.576 CC lib/iscsi/conn.o 
00:03:32.576 CC lib/iscsi/init_grp.o 00:03:32.576 CC lib/vhost/vhost.o 00:03:32.841 CC lib/nvmf/transport.o 00:03:32.841 CC lib/nvmf/tcp.o 00:03:32.841 CC lib/iscsi/iscsi.o 00:03:32.841 CC lib/iscsi/param.o 00:03:33.101 CC lib/ftl/ftl_p2l_log.o 00:03:33.101 CC lib/nvmf/stubs.o 00:03:33.102 CC lib/iscsi/portal_grp.o 00:03:33.360 CC lib/iscsi/tgt_node.o 00:03:33.360 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.360 CC lib/nvmf/mdns_server.o 00:03:33.619 CC lib/vhost/vhost_rpc.o 00:03:33.619 CC lib/vhost/vhost_scsi.o 00:03:33.619 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.619 CC lib/nvmf/rdma.o 00:03:33.619 CC lib/nvmf/auth.o 00:03:33.619 CC lib/iscsi/iscsi_subsystem.o 00:03:33.878 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.878 CC lib/iscsi/iscsi_rpc.o 00:03:33.878 CC lib/iscsi/task.o 00:03:33.878 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.878 CC lib/vhost/vhost_blk.o 00:03:34.137 CC lib/vhost/rte_vhost_user.o 00:03:34.137 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:34.137 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:34.137 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:34.396 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:34.396 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:34.396 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:34.396 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:34.655 LIB libspdk_iscsi.a 00:03:34.655 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.655 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.655 SO libspdk_iscsi.so.8.0 00:03:34.655 CC lib/ftl/utils/ftl_conf.o 00:03:34.915 CC lib/ftl/utils/ftl_md.o 00:03:34.915 CC lib/ftl/utils/ftl_mempool.o 00:03:34.915 SYMLINK libspdk_iscsi.so 00:03:34.915 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.915 CC lib/ftl/utils/ftl_property.o 00:03:34.915 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.915 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.915 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.175 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.175 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.175 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.175 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:35.175 LIB libspdk_vhost.a 00:03:35.175 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:35.175 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.175 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:35.175 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:35.175 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:35.434 SO libspdk_vhost.so.8.0 00:03:35.434 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:35.434 CC lib/ftl/base/ftl_base_dev.o 00:03:35.434 CC lib/ftl/base/ftl_base_bdev.o 00:03:35.434 SYMLINK libspdk_vhost.so 00:03:35.434 CC lib/ftl/ftl_trace.o 00:03:35.693 LIB libspdk_ftl.a 00:03:35.953 SO libspdk_ftl.so.9.0 00:03:36.212 LIB libspdk_nvmf.a 00:03:36.212 SYMLINK libspdk_ftl.so 00:03:36.471 SO libspdk_nvmf.so.20.0 00:03:36.731 SYMLINK libspdk_nvmf.so 00:03:37.301 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.301 CC module/accel/iaa/accel_iaa.o 00:03:37.301 CC module/blob/bdev/blob_bdev.o 00:03:37.301 CC module/sock/posix/posix.o 00:03:37.301 CC module/accel/error/accel_error.o 00:03:37.301 CC module/accel/ioat/accel_ioat.o 00:03:37.301 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.301 CC module/keyring/file/keyring.o 00:03:37.301 CC module/fsdev/aio/fsdev_aio.o 00:03:37.301 CC module/accel/dsa/accel_dsa.o 00:03:37.301 LIB libspdk_env_dpdk_rpc.a 00:03:37.301 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.301 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.301 CC module/keyring/file/keyring_rpc.o 00:03:37.560 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.560 CC module/accel/iaa/accel_iaa_rpc.o 00:03:37.560 CC module/accel/error/accel_error_rpc.o 00:03:37.560 LIB libspdk_scheduler_dynamic.a 00:03:37.560 LIB libspdk_keyring_file.a 00:03:37.560 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.560 SO libspdk_keyring_file.so.2.0 00:03:37.560 LIB libspdk_blob_bdev.a 00:03:37.560 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.560 CC module/accel/dsa/accel_dsa_rpc.o 00:03:37.560 SYMLINK libspdk_scheduler_dynamic.so 00:03:37.560 SO libspdk_blob_bdev.so.12.0 00:03:37.560 LIB 
libspdk_accel_ioat.a 00:03:37.560 LIB libspdk_accel_iaa.a 00:03:37.560 SYMLINK libspdk_keyring_file.so 00:03:37.560 LIB libspdk_accel_error.a 00:03:37.560 SO libspdk_accel_ioat.so.6.0 00:03:37.560 SO libspdk_accel_iaa.so.3.0 00:03:37.560 SO libspdk_accel_error.so.2.0 00:03:37.560 SYMLINK libspdk_blob_bdev.so 00:03:37.819 SYMLINK libspdk_accel_ioat.so 00:03:37.819 SYMLINK libspdk_accel_iaa.so 00:03:37.819 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.819 LIB libspdk_accel_dsa.a 00:03:37.819 SYMLINK libspdk_accel_error.so 00:03:37.819 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.819 CC module/fsdev/aio/linux_aio_mgr.o 00:03:37.819 CC module/scheduler/gscheduler/gscheduler.o 00:03:37.819 SO libspdk_accel_dsa.so.5.0 00:03:37.819 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:37.819 CC module/keyring/linux/keyring.o 00:03:37.819 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:37.819 SYMLINK libspdk_accel_dsa.so 00:03:37.819 CC module/keyring/linux/keyring_rpc.o 00:03:38.078 LIB libspdk_scheduler_gscheduler.a 00:03:38.078 CC module/bdev/delay/vbdev_delay.o 00:03:38.078 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.078 LIB libspdk_keyring_linux.a 00:03:38.078 CC module/blobfs/bdev/blobfs_bdev.o 00:03:38.078 CC module/bdev/error/vbdev_error.o 00:03:38.078 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.078 SO libspdk_keyring_linux.so.1.0 00:03:38.078 LIB libspdk_fsdev_aio.a 00:03:38.078 CC module/bdev/gpt/gpt.o 00:03:38.078 CC module/bdev/lvol/vbdev_lvol.o 00:03:38.078 SO libspdk_fsdev_aio.so.1.0 00:03:38.078 SYMLINK libspdk_keyring_linux.so 00:03:38.078 CC module/bdev/malloc/bdev_malloc.o 00:03:38.339 LIB libspdk_sock_posix.a 00:03:38.339 SYMLINK libspdk_fsdev_aio.so 00:03:38.339 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.339 CC module/bdev/null/bdev_null.o 00:03:38.339 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.339 SO libspdk_sock_posix.so.6.0 00:03:38.339 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.339 CC module/bdev/nvme/bdev_nvme.o 00:03:38.339 SYMLINK 
libspdk_sock_posix.so 00:03:38.339 LIB libspdk_bdev_error.a 00:03:38.339 SO libspdk_bdev_error.so.6.0 00:03:38.606 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.606 LIB libspdk_blobfs_bdev.a 00:03:38.606 SO libspdk_blobfs_bdev.so.6.0 00:03:38.606 SYMLINK libspdk_bdev_error.so 00:03:38.606 CC module/bdev/null/bdev_null_rpc.o 00:03:38.606 CC module/bdev/raid/bdev_raid.o 00:03:38.606 CC module/bdev/passthru/vbdev_passthru.o 00:03:38.606 SYMLINK libspdk_blobfs_bdev.so 00:03:38.606 CC module/bdev/raid/bdev_raid_rpc.o 00:03:38.606 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.606 LIB libspdk_bdev_gpt.a 00:03:38.606 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.606 SO libspdk_bdev_gpt.so.6.0 00:03:38.606 LIB libspdk_bdev_delay.a 00:03:38.876 LIB libspdk_bdev_null.a 00:03:38.876 SO libspdk_bdev_delay.so.6.0 00:03:38.876 SYMLINK libspdk_bdev_gpt.so 00:03:38.876 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.876 SO libspdk_bdev_null.so.6.0 00:03:38.876 SYMLINK libspdk_bdev_delay.so 00:03:38.876 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.876 SYMLINK libspdk_bdev_null.so 00:03:38.876 CC module/bdev/nvme/nvme_rpc.o 00:03:38.876 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.876 CC module/bdev/nvme/vbdev_opal.o 00:03:38.876 LIB libspdk_bdev_malloc.a 00:03:38.876 SO libspdk_bdev_malloc.so.6.0 00:03:38.876 LIB libspdk_bdev_passthru.a 00:03:38.876 SYMLINK libspdk_bdev_malloc.so 00:03:39.137 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.137 SO libspdk_bdev_passthru.so.6.0 00:03:39.137 CC module/bdev/split/vbdev_split.o 00:03:39.137 SYMLINK libspdk_bdev_passthru.so 00:03:39.137 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.137 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.137 LIB libspdk_bdev_lvol.a 00:03:39.399 SO libspdk_bdev_lvol.so.6.0 00:03:39.399 CC module/bdev/aio/bdev_aio.o 00:03:39.399 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.399 SYMLINK libspdk_bdev_lvol.so 00:03:39.399 CC module/bdev/ftl/bdev_ftl.o 00:03:39.399 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:39.399 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.658 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.658 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.658 LIB libspdk_bdev_zone_block.a 00:03:39.658 LIB libspdk_bdev_split.a 00:03:39.658 CC module/bdev/raid/raid0.o 00:03:39.658 SO libspdk_bdev_split.so.6.0 00:03:39.658 SO libspdk_bdev_zone_block.so.6.0 00:03:39.658 SYMLINK libspdk_bdev_split.so 00:03:39.658 CC module/bdev/raid/raid1.o 00:03:39.658 SYMLINK libspdk_bdev_zone_block.so 00:03:39.658 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.658 CC module/bdev/raid/concat.o 00:03:39.658 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.917 CC module/bdev/raid/raid5f.o 00:03:39.917 LIB libspdk_bdev_aio.a 00:03:39.917 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.917 SO libspdk_bdev_aio.so.6.0 00:03:39.917 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.917 LIB libspdk_bdev_ftl.a 00:03:39.917 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.917 SYMLINK libspdk_bdev_aio.so 00:03:39.917 SO libspdk_bdev_ftl.so.6.0 00:03:40.176 SYMLINK libspdk_bdev_ftl.so 00:03:40.176 LIB libspdk_bdev_iscsi.a 00:03:40.176 SO libspdk_bdev_iscsi.so.6.0 00:03:40.176 SYMLINK libspdk_bdev_iscsi.so 00:03:40.176 LIB libspdk_bdev_virtio.a 00:03:40.436 SO libspdk_bdev_virtio.so.6.0 00:03:40.436 LIB libspdk_bdev_raid.a 00:03:40.436 SYMLINK libspdk_bdev_virtio.so 00:03:40.436 SO libspdk_bdev_raid.so.6.0 00:03:40.695 SYMLINK libspdk_bdev_raid.so 00:03:41.632 LIB libspdk_bdev_nvme.a 00:03:41.891 SO libspdk_bdev_nvme.so.7.1 00:03:41.891 SYMLINK libspdk_bdev_nvme.so 00:03:42.459 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.459 CC module/event/subsystems/vmd/vmd.o 00:03:42.459 CC module/event/subsystems/sock/sock.o 00:03:42.459 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.459 CC module/event/subsystems/keyring/keyring.o 00:03:42.459 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.459 CC module/event/subsystems/fsdev/fsdev.o 00:03:42.459 CC 
module/event/subsystems/iobuf/iobuf.o 00:03:42.459 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.719 LIB libspdk_event_vhost_blk.a 00:03:42.719 LIB libspdk_event_vmd.a 00:03:42.719 LIB libspdk_event_keyring.a 00:03:42.719 LIB libspdk_event_sock.a 00:03:42.719 LIB libspdk_event_fsdev.a 00:03:42.719 LIB libspdk_event_scheduler.a 00:03:42.719 SO libspdk_event_vhost_blk.so.3.0 00:03:42.719 SO libspdk_event_vmd.so.6.0 00:03:42.719 SO libspdk_event_sock.so.5.0 00:03:42.719 SO libspdk_event_fsdev.so.1.0 00:03:42.719 SO libspdk_event_keyring.so.1.0 00:03:42.719 SO libspdk_event_scheduler.so.4.0 00:03:42.719 SYMLINK libspdk_event_vhost_blk.so 00:03:42.719 LIB libspdk_event_iobuf.a 00:03:42.719 SYMLINK libspdk_event_keyring.so 00:03:42.719 SYMLINK libspdk_event_fsdev.so 00:03:42.719 SYMLINK libspdk_event_vmd.so 00:03:42.719 SYMLINK libspdk_event_sock.so 00:03:42.719 SYMLINK libspdk_event_scheduler.so 00:03:42.978 SO libspdk_event_iobuf.so.3.0 00:03:42.978 SYMLINK libspdk_event_iobuf.so 00:03:43.236 CC module/event/subsystems/accel/accel.o 00:03:43.497 LIB libspdk_event_accel.a 00:03:43.497 SO libspdk_event_accel.so.6.0 00:03:43.497 SYMLINK libspdk_event_accel.so 00:03:44.066 CC module/event/subsystems/bdev/bdev.o 00:03:44.326 LIB libspdk_event_bdev.a 00:03:44.326 SO libspdk_event_bdev.so.6.0 00:03:44.326 SYMLINK libspdk_event_bdev.so 00:03:44.585 CC module/event/subsystems/scsi/scsi.o 00:03:44.585 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:44.585 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:44.585 CC module/event/subsystems/ublk/ublk.o 00:03:44.845 CC module/event/subsystems/nbd/nbd.o 00:03:44.845 LIB libspdk_event_ublk.a 00:03:44.845 SO libspdk_event_ublk.so.3.0 00:03:44.845 LIB libspdk_event_scsi.a 00:03:44.845 LIB libspdk_event_nbd.a 00:03:44.845 SO libspdk_event_scsi.so.6.0 00:03:44.845 SO libspdk_event_nbd.so.6.0 00:03:44.845 SYMLINK libspdk_event_ublk.so 00:03:44.845 LIB libspdk_event_nvmf.a 00:03:44.845 SYMLINK libspdk_event_scsi.so 00:03:45.103 
SYMLINK libspdk_event_nbd.so 00:03:45.103 SO libspdk_event_nvmf.so.6.0 00:03:45.103 SYMLINK libspdk_event_nvmf.so 00:03:45.361 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:45.361 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.620 LIB libspdk_event_vhost_scsi.a 00:03:45.620 SO libspdk_event_vhost_scsi.so.3.0 00:03:45.620 LIB libspdk_event_iscsi.a 00:03:45.620 SO libspdk_event_iscsi.so.6.0 00:03:45.620 SYMLINK libspdk_event_vhost_scsi.so 00:03:45.620 SYMLINK libspdk_event_iscsi.so 00:03:45.880 SO libspdk.so.6.0 00:03:45.880 SYMLINK libspdk.so 00:03:46.139 CC test/rpc_client/rpc_client_test.o 00:03:46.139 CXX app/trace/trace.o 00:03:46.139 CC app/trace_record/trace_record.o 00:03:46.139 TEST_HEADER include/spdk/accel.h 00:03:46.139 TEST_HEADER include/spdk/accel_module.h 00:03:46.139 TEST_HEADER include/spdk/assert.h 00:03:46.139 TEST_HEADER include/spdk/barrier.h 00:03:46.139 TEST_HEADER include/spdk/base64.h 00:03:46.139 TEST_HEADER include/spdk/bdev.h 00:03:46.139 TEST_HEADER include/spdk/bdev_module.h 00:03:46.139 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.139 TEST_HEADER include/spdk/bit_array.h 00:03:46.139 TEST_HEADER include/spdk/bit_pool.h 00:03:46.139 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.139 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.139 TEST_HEADER include/spdk/blobfs.h 00:03:46.139 TEST_HEADER include/spdk/blob.h 00:03:46.139 TEST_HEADER include/spdk/conf.h 00:03:46.139 CC app/nvmf_tgt/nvmf_main.o 00:03:46.139 TEST_HEADER include/spdk/config.h 00:03:46.139 TEST_HEADER include/spdk/cpuset.h 00:03:46.139 TEST_HEADER include/spdk/crc16.h 00:03:46.139 TEST_HEADER include/spdk/crc32.h 00:03:46.139 TEST_HEADER include/spdk/crc64.h 00:03:46.139 TEST_HEADER include/spdk/dif.h 00:03:46.139 TEST_HEADER include/spdk/dma.h 00:03:46.139 TEST_HEADER include/spdk/endian.h 00:03:46.139 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.139 TEST_HEADER include/spdk/env.h 00:03:46.139 TEST_HEADER include/spdk/event.h 00:03:46.139 TEST_HEADER 
include/spdk/fd_group.h 00:03:46.139 TEST_HEADER include/spdk/fd.h 00:03:46.139 TEST_HEADER include/spdk/file.h 00:03:46.139 TEST_HEADER include/spdk/fsdev.h 00:03:46.139 TEST_HEADER include/spdk/fsdev_module.h 00:03:46.139 TEST_HEADER include/spdk/ftl.h 00:03:46.139 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.139 TEST_HEADER include/spdk/hexlify.h 00:03:46.139 TEST_HEADER include/spdk/histogram_data.h 00:03:46.139 CC examples/util/zipf/zipf.o 00:03:46.139 TEST_HEADER include/spdk/idxd.h 00:03:46.139 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.139 TEST_HEADER include/spdk/init.h 00:03:46.139 TEST_HEADER include/spdk/ioat.h 00:03:46.139 CC test/thread/poller_perf/poller_perf.o 00:03:46.139 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.139 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.139 TEST_HEADER include/spdk/json.h 00:03:46.139 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.139 TEST_HEADER include/spdk/keyring.h 00:03:46.139 TEST_HEADER include/spdk/keyring_module.h 00:03:46.398 TEST_HEADER include/spdk/likely.h 00:03:46.398 TEST_HEADER include/spdk/log.h 00:03:46.398 TEST_HEADER include/spdk/lvol.h 00:03:46.398 TEST_HEADER include/spdk/md5.h 00:03:46.398 TEST_HEADER include/spdk/memory.h 00:03:46.398 TEST_HEADER include/spdk/mmio.h 00:03:46.398 TEST_HEADER include/spdk/nbd.h 00:03:46.398 TEST_HEADER include/spdk/net.h 00:03:46.398 TEST_HEADER include/spdk/notify.h 00:03:46.398 CC test/dma/test_dma/test_dma.o 00:03:46.398 TEST_HEADER include/spdk/nvme.h 00:03:46.398 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.398 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.398 CC test/app/bdev_svc/bdev_svc.o 00:03:46.398 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.398 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.398 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.398 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.398 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.398 TEST_HEADER include/spdk/nvmf.h 00:03:46.398 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.398 
TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.398 TEST_HEADER include/spdk/opal.h 00:03:46.398 TEST_HEADER include/spdk/opal_spec.h 00:03:46.398 TEST_HEADER include/spdk/pci_ids.h 00:03:46.398 TEST_HEADER include/spdk/pipe.h 00:03:46.398 TEST_HEADER include/spdk/queue.h 00:03:46.398 TEST_HEADER include/spdk/reduce.h 00:03:46.398 TEST_HEADER include/spdk/rpc.h 00:03:46.398 TEST_HEADER include/spdk/scheduler.h 00:03:46.398 TEST_HEADER include/spdk/scsi.h 00:03:46.398 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.398 TEST_HEADER include/spdk/sock.h 00:03:46.398 TEST_HEADER include/spdk/stdinc.h 00:03:46.398 TEST_HEADER include/spdk/string.h 00:03:46.398 TEST_HEADER include/spdk/thread.h 00:03:46.398 TEST_HEADER include/spdk/trace.h 00:03:46.398 TEST_HEADER include/spdk/trace_parser.h 00:03:46.398 TEST_HEADER include/spdk/tree.h 00:03:46.398 CC test/env/mem_callbacks/mem_callbacks.o 00:03:46.398 TEST_HEADER include/spdk/ublk.h 00:03:46.398 TEST_HEADER include/spdk/util.h 00:03:46.398 TEST_HEADER include/spdk/uuid.h 00:03:46.398 TEST_HEADER include/spdk/version.h 00:03:46.398 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.398 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.398 LINK rpc_client_test 00:03:46.398 TEST_HEADER include/spdk/vhost.h 00:03:46.398 TEST_HEADER include/spdk/vmd.h 00:03:46.398 TEST_HEADER include/spdk/xor.h 00:03:46.398 TEST_HEADER include/spdk/zipf.h 00:03:46.398 CXX test/cpp_headers/accel.o 00:03:46.398 LINK nvmf_tgt 00:03:46.398 LINK poller_perf 00:03:46.398 LINK zipf 00:03:46.398 LINK spdk_trace_record 00:03:46.398 LINK bdev_svc 00:03:46.398 CXX test/cpp_headers/accel_module.o 00:03:46.722 CXX test/cpp_headers/assert.o 00:03:46.722 LINK spdk_trace 00:03:46.722 CC test/env/vtophys/vtophys.o 00:03:46.722 CXX test/cpp_headers/barrier.o 00:03:46.722 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.722 CC test/env/memory/memory_ut.o 00:03:46.722 CC test/env/pci/pci_ut.o 00:03:46.722 CC examples/ioat/perf/perf.o 00:03:46.989 
LINK test_dma 00:03:46.989 LINK vtophys 00:03:46.989 CXX test/cpp_headers/base64.o 00:03:46.989 LINK mem_callbacks 00:03:46.989 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:46.989 LINK env_dpdk_post_init 00:03:46.989 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.989 LINK ioat_perf 00:03:46.989 CXX test/cpp_headers/bdev.o 00:03:47.248 CC app/spdk_tgt/spdk_tgt.o 00:03:47.248 CC app/spdk_lspci/spdk_lspci.o 00:03:47.248 CC app/spdk_nvme_perf/perf.o 00:03:47.248 LINK iscsi_tgt 00:03:47.248 CC app/spdk_nvme_identify/identify.o 00:03:47.248 LINK pci_ut 00:03:47.248 CXX test/cpp_headers/bdev_module.o 00:03:47.248 CC examples/ioat/verify/verify.o 00:03:47.248 LINK spdk_lspci 00:03:47.248 LINK spdk_tgt 00:03:47.507 LINK nvme_fuzz 00:03:47.507 CXX test/cpp_headers/bdev_zone.o 00:03:47.507 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:47.507 LINK verify 00:03:47.507 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.507 CC app/spdk_nvme_discover/discovery_aer.o 00:03:47.507 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.507 CXX test/cpp_headers/bit_array.o 00:03:47.766 CC test/app/histogram_perf/histogram_perf.o 00:03:47.766 LINK spdk_nvme_discover 00:03:47.766 CXX test/cpp_headers/bit_pool.o 00:03:47.766 CC examples/vmd/lsvmd/lsvmd.o 00:03:47.766 CC examples/idxd/perf/perf.o 00:03:47.766 LINK histogram_perf 00:03:48.025 CXX test/cpp_headers/blob_bdev.o 00:03:48.025 LINK lsvmd 00:03:48.025 LINK memory_ut 00:03:48.025 LINK vhost_fuzz 00:03:48.025 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.025 LINK spdk_nvme_perf 00:03:48.025 CXX test/cpp_headers/blobfs_bdev.o 00:03:48.285 LINK spdk_nvme_identify 00:03:48.285 CC examples/vmd/led/led.o 00:03:48.285 LINK idxd_perf 00:03:48.285 LINK interrupt_tgt 00:03:48.285 CC examples/thread/thread/thread_ex.o 00:03:48.285 CXX test/cpp_headers/blobfs.o 00:03:48.285 LINK led 00:03:48.544 CC test/event/event_perf/event_perf.o 00:03:48.544 CC test/nvme/aer/aer.o 00:03:48.544 CC app/spdk_top/spdk_top.o 00:03:48.544 CC 
examples/sock/hello_world/hello_sock.o 00:03:48.544 CXX test/cpp_headers/blob.o 00:03:48.544 CC test/nvme/reset/reset.o 00:03:48.544 CC test/nvme/sgl/sgl.o 00:03:48.544 LINK thread 00:03:48.544 LINK event_perf 00:03:48.544 CC test/nvme/e2edp/nvme_dp.o 00:03:48.803 CXX test/cpp_headers/conf.o 00:03:48.803 LINK hello_sock 00:03:48.803 LINK aer 00:03:48.803 LINK reset 00:03:48.803 CC test/event/reactor/reactor.o 00:03:48.803 CXX test/cpp_headers/config.o 00:03:48.803 CXX test/cpp_headers/cpuset.o 00:03:48.803 CC test/nvme/overhead/overhead.o 00:03:48.803 LINK sgl 00:03:49.062 LINK nvme_dp 00:03:49.062 LINK reactor 00:03:49.062 CXX test/cpp_headers/crc16.o 00:03:49.062 CC examples/accel/perf/accel_perf.o 00:03:49.062 CC test/accel/dif/dif.o 00:03:49.321 LINK overhead 00:03:49.321 CXX test/cpp_headers/crc32.o 00:03:49.321 CC test/event/reactor_perf/reactor_perf.o 00:03:49.321 CC test/nvme/err_injection/err_injection.o 00:03:49.321 CC examples/blob/hello_world/hello_blob.o 00:03:49.321 CC examples/blob/cli/blobcli.o 00:03:49.321 CXX test/cpp_headers/crc64.o 00:03:49.321 LINK reactor_perf 00:03:49.581 LINK err_injection 00:03:49.581 CC test/app/jsoncat/jsoncat.o 00:03:49.581 LINK hello_blob 00:03:49.581 LINK iscsi_fuzz 00:03:49.581 CXX test/cpp_headers/dif.o 00:03:49.581 LINK jsoncat 00:03:49.840 CC test/nvme/startup/startup.o 00:03:49.840 CXX test/cpp_headers/dma.o 00:03:49.840 CC test/event/app_repeat/app_repeat.o 00:03:49.840 LINK spdk_top 00:03:49.840 CXX test/cpp_headers/endian.o 00:03:49.840 LINK accel_perf 00:03:49.840 CXX test/cpp_headers/env_dpdk.o 00:03:49.840 LINK blobcli 00:03:49.840 LINK app_repeat 00:03:49.840 LINK startup 00:03:49.840 CC test/app/stub/stub.o 00:03:50.098 CXX test/cpp_headers/env.o 00:03:50.098 LINK dif 00:03:50.099 CC test/nvme/reserve/reserve.o 00:03:50.099 CC test/nvme/simple_copy/simple_copy.o 00:03:50.099 CXX test/cpp_headers/event.o 00:03:50.099 CC app/vhost/vhost.o 00:03:50.099 LINK stub 00:03:50.099 CXX test/cpp_headers/fd_group.o 
00:03:50.357 CXX test/cpp_headers/fd.o 00:03:50.357 CC test/event/scheduler/scheduler.o 00:03:50.357 LINK reserve 00:03:50.357 CC examples/nvme/hello_world/hello_world.o 00:03:50.357 CC app/spdk_dd/spdk_dd.o 00:03:50.357 LINK simple_copy 00:03:50.357 LINK vhost 00:03:50.357 CXX test/cpp_headers/file.o 00:03:50.357 CC app/fio/nvme/fio_plugin.o 00:03:50.357 CC examples/nvme/reconnect/reconnect.o 00:03:50.357 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.617 LINK scheduler 00:03:50.617 LINK hello_world 00:03:50.617 CXX test/cpp_headers/fsdev.o 00:03:50.617 CC examples/nvme/arbitration/arbitration.o 00:03:50.617 CC test/nvme/connect_stress/connect_stress.o 00:03:50.617 CC test/nvme/boot_partition/boot_partition.o 00:03:50.876 LINK spdk_dd 00:03:50.876 CXX test/cpp_headers/fsdev_module.o 00:03:50.876 LINK boot_partition 00:03:50.876 LINK reconnect 00:03:50.876 CC examples/nvme/hotplug/hotplug.o 00:03:50.876 LINK connect_stress 00:03:50.876 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:50.876 CXX test/cpp_headers/ftl.o 00:03:51.140 LINK arbitration 00:03:51.140 LINK spdk_nvme 00:03:51.140 LINK nvme_manage 00:03:51.140 LINK cmb_copy 00:03:51.140 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:51.140 CC examples/nvme/abort/abort.o 00:03:51.140 CXX test/cpp_headers/gpt_spec.o 00:03:51.140 LINK hotplug 00:03:51.140 CC test/nvme/compliance/nvme_compliance.o 00:03:51.403 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:51.403 CC app/fio/bdev/fio_plugin.o 00:03:51.403 LINK pmr_persistence 00:03:51.403 CXX test/cpp_headers/hexlify.o 00:03:51.403 CXX test/cpp_headers/histogram_data.o 00:03:51.403 CC test/nvme/fused_ordering/fused_ordering.o 00:03:51.403 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.663 CXX test/cpp_headers/idxd.o 00:03:51.663 CC test/blobfs/mkfs/mkfs.o 00:03:51.663 LINK nvme_compliance 00:03:51.663 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:51.663 LINK abort 00:03:51.663 LINK fused_ordering 00:03:51.663 LINK hello_fsdev 00:03:51.663 CC 
test/nvme/fdp/fdp.o 00:03:51.663 CXX test/cpp_headers/idxd_spec.o 00:03:51.663 LINK hello_bdev 00:03:51.923 LINK mkfs 00:03:51.923 CXX test/cpp_headers/init.o 00:03:51.923 LINK doorbell_aers 00:03:51.923 CXX test/cpp_headers/ioat.o 00:03:51.923 CXX test/cpp_headers/ioat_spec.o 00:03:51.923 CXX test/cpp_headers/iscsi_spec.o 00:03:51.923 CXX test/cpp_headers/json.o 00:03:51.923 CXX test/cpp_headers/jsonrpc.o 00:03:51.923 CXX test/cpp_headers/keyring.o 00:03:51.923 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.923 CXX test/cpp_headers/keyring_module.o 00:03:52.181 CXX test/cpp_headers/likely.o 00:03:52.181 CXX test/cpp_headers/log.o 00:03:52.181 CXX test/cpp_headers/lvol.o 00:03:52.181 LINK fdp 00:03:52.181 LINK spdk_bdev 00:03:52.181 CXX test/cpp_headers/md5.o 00:03:52.181 CXX test/cpp_headers/memory.o 00:03:52.181 CXX test/cpp_headers/mmio.o 00:03:52.181 CXX test/cpp_headers/nbd.o 00:03:52.181 CXX test/cpp_headers/net.o 00:03:52.440 CC test/nvme/cuse/cuse.o 00:03:52.440 CXX test/cpp_headers/notify.o 00:03:52.440 CXX test/cpp_headers/nvme.o 00:03:52.440 CXX test/cpp_headers/nvme_intel.o 00:03:52.440 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.440 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.440 CXX test/cpp_headers/nvme_spec.o 00:03:52.440 CXX test/cpp_headers/nvme_zns.o 00:03:52.440 CC test/lvol/esnap/esnap.o 00:03:52.440 CC test/bdev/bdevio/bdevio.o 00:03:52.700 CXX test/cpp_headers/nvmf_cmd.o 00:03:52.700 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:52.700 CXX test/cpp_headers/nvmf.o 00:03:52.700 CXX test/cpp_headers/nvmf_spec.o 00:03:52.700 CXX test/cpp_headers/nvmf_transport.o 00:03:52.700 CXX test/cpp_headers/opal.o 00:03:52.700 CXX test/cpp_headers/opal_spec.o 00:03:52.959 CXX test/cpp_headers/pci_ids.o 00:03:52.959 CXX test/cpp_headers/pipe.o 00:03:52.959 CXX test/cpp_headers/queue.o 00:03:52.959 CXX test/cpp_headers/reduce.o 00:03:52.959 CXX test/cpp_headers/rpc.o 00:03:52.959 CXX test/cpp_headers/scheduler.o 00:03:52.959 CXX test/cpp_headers/scsi.o 
00:03:52.959 CXX test/cpp_headers/scsi_spec.o 00:03:52.959 LINK bdevio 00:03:52.959 CXX test/cpp_headers/sock.o 00:03:53.218 CXX test/cpp_headers/stdinc.o 00:03:53.218 CXX test/cpp_headers/string.o 00:03:53.218 CXX test/cpp_headers/thread.o 00:03:53.218 CXX test/cpp_headers/trace.o 00:03:53.218 CXX test/cpp_headers/trace_parser.o 00:03:53.218 CXX test/cpp_headers/tree.o 00:03:53.218 CXX test/cpp_headers/ublk.o 00:03:53.218 CXX test/cpp_headers/util.o 00:03:53.218 LINK bdevperf 00:03:53.218 CXX test/cpp_headers/uuid.o 00:03:53.218 CXX test/cpp_headers/version.o 00:03:53.218 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.218 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.218 CXX test/cpp_headers/vhost.o 00:03:53.477 CXX test/cpp_headers/vmd.o 00:03:53.477 CXX test/cpp_headers/xor.o 00:03:53.477 CXX test/cpp_headers/zipf.o 00:03:53.736 LINK cuse 00:03:53.995 CC examples/nvmf/nvmf/nvmf.o 00:03:54.254 LINK nvmf 00:03:59.585 LINK esnap 00:03:59.585 00:03:59.585 real 1m32.745s 00:03:59.585 user 8m2.428s 00:03:59.585 sys 1m46.186s 00:03:59.585 08:15:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:59.585 08:15:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.585 ************************************ 00:03:59.585 END TEST make 00:03:59.586 ************************************ 00:03:59.586 08:15:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.586 08:15:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.586 08:15:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.586 08:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 08:15:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.586 08:15:11 -- pm/common@44 -- $ pid=5475 00:03:59.586 08:15:11 -- pm/common@50 -- $ kill -TERM 5475 00:03:59.586 08:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 08:15:11 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.586 08:15:11 -- pm/common@44 -- $ pid=5477 00:03:59.586 08:15:11 -- pm/common@50 -- $ kill -TERM 5477 00:03:59.586 08:15:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:59.586 08:15:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:59.586 08:15:11 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:59.586 08:15:11 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:59.586 08:15:11 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:59.586 08:15:11 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:59.586 08:15:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.586 08:15:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.586 08:15:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.586 08:15:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.586 08:15:11 -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.586 08:15:11 -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.586 08:15:11 -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.586 08:15:11 -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.586 08:15:11 -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.586 08:15:11 -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.586 08:15:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.586 08:15:11 -- scripts/common.sh@344 -- # case "$op" in 00:03:59.586 08:15:11 -- scripts/common.sh@345 -- # : 1 00:03:59.586 08:15:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.586 08:15:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.586 08:15:11 -- scripts/common.sh@365 -- # decimal 1 00:03:59.586 08:15:11 -- scripts/common.sh@353 -- # local d=1 00:03:59.586 08:15:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.586 08:15:11 -- scripts/common.sh@355 -- # echo 1 00:03:59.586 08:15:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.586 08:15:11 -- scripts/common.sh@366 -- # decimal 2 00:03:59.586 08:15:11 -- scripts/common.sh@353 -- # local d=2 00:03:59.586 08:15:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.586 08:15:11 -- scripts/common.sh@355 -- # echo 2 00:03:59.586 08:15:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.586 08:15:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.586 08:15:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.586 08:15:11 -- scripts/common.sh@368 -- # return 0 00:03:59.586 08:15:11 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.586 08:15:11 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.586 --rc genhtml_branch_coverage=1 00:03:59.586 --rc genhtml_function_coverage=1 00:03:59.586 --rc genhtml_legend=1 00:03:59.586 --rc geninfo_all_blocks=1 00:03:59.586 --rc geninfo_unexecuted_blocks=1 00:03:59.586 00:03:59.586 ' 00:03:59.586 08:15:11 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.586 --rc genhtml_branch_coverage=1 00:03:59.586 --rc genhtml_function_coverage=1 00:03:59.586 --rc genhtml_legend=1 00:03:59.586 --rc geninfo_all_blocks=1 00:03:59.586 --rc geninfo_unexecuted_blocks=1 00:03:59.586 00:03:59.586 ' 00:03:59.586 08:15:11 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.586 --rc genhtml_branch_coverage=1 00:03:59.586 --rc 
genhtml_function_coverage=1 00:03:59.586 --rc genhtml_legend=1 00:03:59.586 --rc geninfo_all_blocks=1 00:03:59.586 --rc geninfo_unexecuted_blocks=1 00:03:59.586 00:03:59.586 ' 00:03:59.586 08:15:11 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.586 --rc genhtml_branch_coverage=1 00:03:59.586 --rc genhtml_function_coverage=1 00:03:59.586 --rc genhtml_legend=1 00:03:59.586 --rc geninfo_all_blocks=1 00:03:59.586 --rc geninfo_unexecuted_blocks=1 00:03:59.586 00:03:59.586 ' 00:03:59.586 08:15:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.586 08:15:11 -- nvmf/common.sh@7 -- # uname -s 00:03:59.586 08:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.586 08:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.586 08:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.586 08:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.586 08:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.586 08:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.586 08:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.586 08:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.586 08:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.586 08:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.586 08:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:03:59.586 08:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:03:59.586 08:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.586 08:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.586 08:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.586 08:15:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:59.586 08:15:11 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.586 08:15:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.586 08:15:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.586 08:15:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.586 08:15:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.586 08:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 08:15:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 08:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 08:15:11 -- paths/export.sh@5 -- # export PATH 00:03:59.586 08:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.586 08:15:11 -- nvmf/common.sh@51 -- # : 0 00:03:59.586 08:15:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.586 08:15:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.586 08:15:11 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:59.586 08:15:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.586 08:15:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.586 08:15:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.586 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.586 08:15:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.586 08:15:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.586 08:15:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.586 08:15:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.586 08:15:11 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.586 08:15:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.586 08:15:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.586 08:15:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.586 08:15:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.586 08:15:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.586 08:15:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.586 08:15:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.586 08:15:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.586 08:15:11 -- spdk/autotest.sh@48 -- # udevadm_pid=54528 00:03:59.586 08:15:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.586 08:15:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.586 08:15:11 -- pm/common@17 -- # local monitor 00:03:59.586 08:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 08:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.586 08:15:11 -- pm/common@25 -- # sleep 1 00:03:59.586 08:15:11 -- pm/common@21 -- # date +%s 00:03:59.586 08:15:11 -- 
pm/common@21 -- # date +%s 00:03:59.587 08:15:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734077711 00:03:59.587 08:15:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734077711 00:03:59.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734077711_collect-cpu-load.pm.log 00:03:59.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734077711_collect-vmstat.pm.log 00:04:00.521 08:15:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.521 08:15:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.521 08:15:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.521 08:15:12 -- common/autotest_common.sh@10 -- # set +x 00:04:00.521 08:15:12 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.521 08:15:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:00.521 08:15:12 -- common/autotest_common.sh@10 -- # set +x 00:04:00.781 08:15:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:00.781 08:15:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:00.781 08:15:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:00.781 08:15:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:00.781 08:15:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:00.781 08:15:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.781 08:15:12 -- common/autotest_common.sh@1457 -- # uname 00:04:00.781 08:15:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:00.781 08:15:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.781 08:15:12 -- common/autotest_common.sh@1477 -- 
# uname 00:04:00.781 08:15:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:00.781 08:15:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:00.781 08:15:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:00.781 lcov: LCOV version 1.15 00:04:00.781 08:15:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:15.677 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:15.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:33.776 08:15:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:33.776 08:15:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.776 08:15:44 -- common/autotest_common.sh@10 -- # set +x 00:04:33.776 08:15:44 -- spdk/autotest.sh@78 -- # rm -f 00:04:33.776 08:15:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.776 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:33.776 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:33.776 08:15:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:33.776 08:15:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:33.776 08:15:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:33.776 08:15:45 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:33.776 
08:15:45 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:33.776 08:15:45 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:33.776 08:15:45 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:33.776 08:15:45 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:33.776 08:15:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:33.776 08:15:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:33.776 08:15:45 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:33.776 08:15:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:33.776 08:15:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:33.776 08:15:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:33.776 08:15:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:33.776 08:15:45 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:33.776 08:15:45 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:33.776 08:15:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:33.776 08:15:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:33.776 08:15:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:33.776 08:15:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:33.776 08:15:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:33.776 08:15:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:33.776 08:15:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:33.776 08:15:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:33.776 No valid GPT data, bailing 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # pt= 00:04:33.776 08:15:45 -- scripts/common.sh@395 -- # return 1 00:04:33.776 08:15:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:33.776 1+0 records in 00:04:33.776 1+0 records out 00:04:33.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571841 s, 183 MB/s 00:04:33.776 08:15:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:33.776 08:15:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:33.776 08:15:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:33.776 08:15:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:33.776 08:15:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:33.776 No valid GPT data, bailing 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # pt= 00:04:33.776 08:15:45 -- scripts/common.sh@395 -- # return 1 00:04:33.776 08:15:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:33.776 1+0 records in 00:04:33.776 1+0 records 
out 00:04:33.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605808 s, 173 MB/s 00:04:33.776 08:15:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:33.776 08:15:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:33.776 08:15:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:33.776 08:15:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:33.776 08:15:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:33.776 No valid GPT data, bailing 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # pt= 00:04:33.776 08:15:45 -- scripts/common.sh@395 -- # return 1 00:04:33.776 08:15:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:33.776 1+0 records in 00:04:33.776 1+0 records out 00:04:33.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593854 s, 177 MB/s 00:04:33.776 08:15:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:33.776 08:15:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:33.776 08:15:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:33.776 08:15:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:33.776 08:15:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:33.776 No valid GPT data, bailing 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:33.776 08:15:45 -- scripts/common.sh@394 -- # pt= 00:04:33.776 08:15:45 -- scripts/common.sh@395 -- # return 1 00:04:33.776 08:15:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:33.776 1+0 records in 00:04:33.776 1+0 records out 00:04:33.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577838 s, 181 MB/s 00:04:33.776 08:15:45 -- spdk/autotest.sh@105 -- # sync 00:04:33.776 08:15:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:04:33.776 08:15:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:33.776 08:15:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.114 08:15:48 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.114 08:15:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:37.114 08:15:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.114 08:15:48 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:37.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.389 Hugepages 00:04:37.389 node hugesize free / total 00:04:37.389 node0 1048576kB 0 / 0 00:04:37.389 node0 2048kB 0 / 0 00:04:37.389 00:04:37.389 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.389 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:37.648 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:37.648 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:37.648 08:15:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:37.648 08:15:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:37.648 08:15:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:37.648 08:15:49 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.586 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.586 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.586 08:15:50 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:39.525 08:15:51 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:39.525 08:15:51 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:39.525 08:15:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.525 08:15:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:04:39.525 08:15:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:39.525 08:15:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:39.525 08:15:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.525 08:15:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.525 08:15:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:39.784 08:15:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:39.784 08:15:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:39.784 08:15:51 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.353 Waiting for block devices as requested 00:04:40.353 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.353 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.613 08:15:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:40.613 08:15:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:40.613 
08:15:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:40.613 08:15:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:40.613 08:15:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:40.613 08:15:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1543 -- # continue 00:04:40.613 08:15:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:40.613 08:15:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:40.613 08:15:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:40.613 08:15:52 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:40.613 08:15:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:40.613 08:15:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:40.613 08:15:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.613 08:15:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:40.613 08:15:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:40.613 08:15:52 -- common/autotest_common.sh@1543 -- # continue 00:04:40.613 08:15:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:40.613 08:15:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.613 08:15:52 -- common/autotest_common.sh@10 -- # set +x 00:04:40.613 08:15:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:40.613 08:15:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.614 08:15:52 -- common/autotest_common.sh@10 -- # set +x 00:04:40.614 08:15:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.810 08:15:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:41.810 08:15:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.810 08:15:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.810 08:15:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:41.810 08:15:53 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:41.810 08:15:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.810 08:15:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.810 08:15:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:41.810 08:15:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:41.810 08:15:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:41.810 08:15:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.810 08:15:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.810 08:15:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.810 08:15:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.810 08:15:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.810 08:15:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.810 08:15:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:41.810 08:15:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:41.810 08:15:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.810 08:15:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:41.810 08:15:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:41.810 08:15:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.810 08:15:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.810 08:15:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:41.810 08:15:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:41.810 08:15:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.810 08:15:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:41.810 08:15:54 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:41.810 08:15:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:41.810 08:15:54 -- common/autotest_common.sh@1580 -- # return 0 00:04:41.810 08:15:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:41.810 08:15:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:41.810 08:15:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.810 08:15:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.810 08:15:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:41.810 08:15:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.810 08:15:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.810 08:15:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:41.810 08:15:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.810 08:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.810 08:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.810 08:15:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.810 ************************************ 00:04:41.810 START TEST env 00:04:41.810 ************************************ 00:04:41.810 08:15:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:42.070 * Looking for test storage... 
00:04:42.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.070 08:15:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.070 08:15:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.070 08:15:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.070 08:15:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.070 08:15:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.070 08:15:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.070 08:15:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.070 08:15:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.070 08:15:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.070 08:15:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.070 08:15:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.070 08:15:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:42.070 08:15:54 env -- scripts/common.sh@345 -- # : 1 00:04:42.070 08:15:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.070 08:15:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.070 08:15:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:42.070 08:15:54 env -- scripts/common.sh@353 -- # local d=1 00:04:42.070 08:15:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.070 08:15:54 env -- scripts/common.sh@355 -- # echo 1 00:04:42.070 08:15:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.070 08:15:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:42.070 08:15:54 env -- scripts/common.sh@353 -- # local d=2 00:04:42.070 08:15:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.070 08:15:54 env -- scripts/common.sh@355 -- # echo 2 00:04:42.070 08:15:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.070 08:15:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.070 08:15:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.070 08:15:54 env -- scripts/common.sh@368 -- # return 0 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.070 --rc genhtml_branch_coverage=1 00:04:42.070 --rc genhtml_function_coverage=1 00:04:42.070 --rc genhtml_legend=1 00:04:42.070 --rc geninfo_all_blocks=1 00:04:42.070 --rc geninfo_unexecuted_blocks=1 00:04:42.070 00:04:42.070 ' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.070 --rc genhtml_branch_coverage=1 00:04:42.070 --rc genhtml_function_coverage=1 00:04:42.070 --rc genhtml_legend=1 00:04:42.070 --rc geninfo_all_blocks=1 00:04:42.070 --rc geninfo_unexecuted_blocks=1 00:04:42.070 00:04:42.070 ' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:42.070 --rc genhtml_branch_coverage=1 00:04:42.070 --rc genhtml_function_coverage=1 00:04:42.070 --rc genhtml_legend=1 00:04:42.070 --rc geninfo_all_blocks=1 00:04:42.070 --rc geninfo_unexecuted_blocks=1 00:04:42.070 00:04:42.070 ' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.070 --rc genhtml_branch_coverage=1 00:04:42.070 --rc genhtml_function_coverage=1 00:04:42.070 --rc genhtml_legend=1 00:04:42.070 --rc geninfo_all_blocks=1 00:04:42.070 --rc geninfo_unexecuted_blocks=1 00:04:42.070 00:04:42.070 ' 00:04:42.070 08:15:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.070 08:15:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.070 08:15:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.070 ************************************ 00:04:42.070 START TEST env_memory 00:04:42.070 ************************************ 00:04:42.070 08:15:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.070 00:04:42.070 00:04:42.070 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.070 http://cunit.sourceforge.net/ 00:04:42.070 00:04:42.070 00:04:42.070 Suite: memory 00:04:42.070 Test: alloc and free memory map ...[2024-12-13 08:15:54.423911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.332 passed 00:04:42.332 Test: mem map translation ...[2024-12-13 08:15:54.470757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.332 [2024-12-13 08:15:54.470872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.332 [2024-12-13 08:15:54.470963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.332 [2024-12-13 08:15:54.470991] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.332 passed 00:04:42.332 Test: mem map registration ...[2024-12-13 08:15:54.564675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:42.332 [2024-12-13 08:15:54.564758] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:42.332 passed 00:04:42.332 Test: mem map adjacent registrations ...passed 00:04:42.332 00:04:42.332 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.332 suites 1 1 n/a 0 0 00:04:42.332 tests 4 4 4 0 0 00:04:42.332 asserts 152 152 152 0 n/a 00:04:42.332 00:04:42.332 Elapsed time = 0.278 seconds 00:04:42.332 00:04:42.332 real 0m0.334s 00:04:42.332 user 0m0.289s 00:04:42.332 sys 0m0.032s 00:04:42.332 08:15:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.332 08:15:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.332 ************************************ 00:04:42.332 END TEST env_memory 00:04:42.332 ************************************ 00:04:42.592 08:15:54 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.592 08:15:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.592 08:15:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.592 08:15:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.592 
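The `START TEST` / `END TEST` banners wrapping `env_memory` above (and `env_vtophys` below) come from autotest's `run_test` wrapper. A minimal sketch of that banner-and-status pattern, with the function name and layout invented here for illustration rather than taken from SPDK's actual `run_test`:

```shell
# Hypothetical sketch of the run_test banner pattern seen in this log.
# The real SPDK helper also records timing; this only shows the framing.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                       # run the test command with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc                 # propagate the test's exit status
}

run_test_sketch demo true      # prints the banners around a trivial command
```

The wrapper preserves the wrapped command's exit code, which is what lets the pipeline mark an individual suite as failed while still printing its closing banner.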
************************************ 00:04:42.592 START TEST env_vtophys 00:04:42.592 ************************************ 00:04:42.592 08:15:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.592 EAL: lib.eal log level changed from notice to debug 00:04:42.592 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.592 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.592 EAL: Maximum logical cores by configuration: 128 00:04:42.592 EAL: Detected CPU lcores: 10 00:04:42.592 EAL: Detected NUMA nodes: 1 00:04:42.592 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.592 EAL: Detected shared linkage of DPDK 00:04:42.592 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.592 EAL: Selected IOVA mode 'PA' 00:04:42.592 EAL: Probing VFIO support... 00:04:42.592 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.592 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.592 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.592 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.592 EAL: Setting up physically contiguous memory... 
00:04:42.592 EAL: Setting maximum number of open files to 524288 00:04:42.592 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.592 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.592 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.592 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.592 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.592 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.592 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.592 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.592 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.592 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.592 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.592 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.592 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.592 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.592 EAL: Hugepages will be freed exactly as allocated. 
00:04:42.592 EAL: No shared files mode enabled, IPC is disabled 00:04:42.592 EAL: No shared files mode enabled, IPC is disabled 00:04:42.592 EAL: TSC frequency is ~2290000 KHz 00:04:42.592 EAL: Main lcore 0 is ready (tid=7f4f4e936a40;cpuset=[0]) 00:04:42.592 EAL: Trying to obtain current memory policy. 00:04:42.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.592 EAL: Restoring previous memory policy: 0 00:04:42.592 EAL: request: mp_malloc_sync 00:04:42.592 EAL: No shared files mode enabled, IPC is disabled 00:04:42.592 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.592 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.852 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.852 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.852 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:42.852 00:04:42.852 00:04:42.852 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.852 http://cunit.sourceforge.net/ 00:04:42.852 00:04:42.852 00:04:42.852 Suite: components_suite 00:04:43.111 Test: vtophys_malloc_test ...passed 00:04:43.111 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.111 EAL: Restoring previous memory policy: 4 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.111 EAL: Trying to obtain current memory policy. 
00:04:43.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.111 EAL: Restoring previous memory policy: 4 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.111 EAL: Trying to obtain current memory policy. 00:04:43.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.111 EAL: Restoring previous memory policy: 4 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.111 EAL: Trying to obtain current memory policy. 00:04:43.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.111 EAL: Restoring previous memory policy: 4 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.111 EAL: request: mp_malloc_sync 00:04:43.111 EAL: No shared files mode enabled, IPC is disabled 00:04:43.111 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.371 EAL: Trying to obtain current memory policy. 
00:04:43.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.371 EAL: Restoring previous memory policy: 4 00:04:43.371 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.371 EAL: request: mp_malloc_sync 00:04:43.371 EAL: No shared files mode enabled, IPC is disabled 00:04:43.371 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.371 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.371 EAL: request: mp_malloc_sync 00:04:43.371 EAL: No shared files mode enabled, IPC is disabled 00:04:43.371 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.371 EAL: Trying to obtain current memory policy. 00:04:43.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.371 EAL: Restoring previous memory policy: 4 00:04:43.371 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.371 EAL: request: mp_malloc_sync 00:04:43.371 EAL: No shared files mode enabled, IPC is disabled 00:04:43.371 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.630 EAL: request: mp_malloc_sync 00:04:43.630 EAL: No shared files mode enabled, IPC is disabled 00:04:43.630 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.630 EAL: Trying to obtain current memory policy. 00:04:43.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.630 EAL: Restoring previous memory policy: 4 00:04:43.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.630 EAL: request: mp_malloc_sync 00:04:43.630 EAL: No shared files mode enabled, IPC is disabled 00:04:43.630 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.888 EAL: request: mp_malloc_sync 00:04:43.888 EAL: No shared files mode enabled, IPC is disabled 00:04:43.888 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.147 EAL: Trying to obtain current memory policy. 
00:04:44.147 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.147 EAL: Restoring previous memory policy: 4 00:04:44.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.147 EAL: request: mp_malloc_sync 00:04:44.147 EAL: No shared files mode enabled, IPC is disabled 00:04:44.147 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.715 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.715 EAL: request: mp_malloc_sync 00:04:44.715 EAL: No shared files mode enabled, IPC is disabled 00:04:44.715 EAL: Heap on socket 0 was shrunk by 258MB 00:04:45.314 EAL: Trying to obtain current memory policy. 00:04:45.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.314 EAL: Restoring previous memory policy: 4 00:04:45.314 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.314 EAL: request: mp_malloc_sync 00:04:45.314 EAL: No shared files mode enabled, IPC is disabled 00:04:45.314 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.507 EAL: request: mp_malloc_sync 00:04:46.507 EAL: No shared files mode enabled, IPC is disabled 00:04:46.507 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.446 EAL: Trying to obtain current memory policy. 
00:04:47.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.446 EAL: Restoring previous memory policy: 4 00:04:47.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.446 EAL: request: mp_malloc_sync 00:04:47.446 EAL: No shared files mode enabled, IPC is disabled 00:04:47.446 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.352 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.352 EAL: request: mp_malloc_sync 00:04:49.352 EAL: No shared files mode enabled, IPC is disabled 00:04:49.352 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.258 passed 00:04:51.258 00:04:51.258 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.258 suites 1 1 n/a 0 0 00:04:51.258 tests 2 2 2 0 0 00:04:51.258 asserts 5502 5502 5502 0 n/a 00:04:51.258 00:04:51.258 Elapsed time = 8.256 seconds 00:04:51.258 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.258 EAL: request: mp_malloc_sync 00:04:51.258 EAL: No shared files mode enabled, IPC is disabled 00:04:51.258 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.258 EAL: No shared files mode enabled, IPC is disabled 00:04:51.258 EAL: No shared files mode enabled, IPC is disabled 00:04:51.258 EAL: No shared files mode enabled, IPC is disabled 00:04:51.258 00:04:51.258 real 0m8.599s 00:04:51.258 user 0m7.600s 00:04:51.258 sys 0m0.843s 00:04:51.258 08:16:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.258 08:16:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.258 ************************************ 00:04:51.258 END TEST env_vtophys 00:04:51.258 ************************************ 00:04:51.258 08:16:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.258 08:16:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.258 08:16:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.258 08:16:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.258 
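An aside on the env_vtophys run above: the heap-expansion sizes in the EAL log (34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) follow a doubling pattern, 2^n MiB plus a constant 2 MiB remainder. This is simply a reading of the numbers in this log, not an SPDK-documented invariant; a one-liner reproduces the sequence:

```python
# Heap expansion steps observed in the env_vtophys log above:
# each step is 2**n MiB plus a 2 MiB remainder (n = 5..10).
sizes = [2**n + 2 for n in range(5, 11)]
print(sizes)  # [34, 66, 130, 258, 514, 1026]
```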
************************************ 00:04:51.258 START TEST env_pci 00:04:51.258 ************************************ 00:04:51.258 08:16:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.258 00:04:51.258 00:04:51.258 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.258 http://cunit.sourceforge.net/ 00:04:51.258 00:04:51.258 00:04:51.258 Suite: pci 00:04:51.258 Test: pci_hook ...[2024-12-13 08:16:03.450412] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56875 has claimed it 00:04:51.258 passed 00:04:51.258 00:04:51.258 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.258 suites 1 1 n/a 0 0 00:04:51.258 tests 1 1 1 0 0 00:04:51.258 asserts 25 25 25 0 n/a 00:04:51.258 00:04:51.258 Elapsed time = 0.007 seconds 00:04:51.258 EAL: Cannot find device (10000:00:01.0) 00:04:51.258 EAL: Failed to attach device on primary process 00:04:51.258 00:04:51.258 real 0m0.103s 00:04:51.258 user 0m0.049s 00:04:51.258 sys 0m0.053s 00:04:51.258 08:16:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.258 08:16:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.258 ************************************ 00:04:51.258 END TEST env_pci 00:04:51.258 ************************************ 00:04:51.258 08:16:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:51.258 08:16:03 env -- env/env.sh@15 -- # uname 00:04:51.258 08:16:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:51.258 08:16:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:51.258 08:16:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.258 08:16:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:51.258 08:16:03 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.258 08:16:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.258 ************************************ 00:04:51.258 START TEST env_dpdk_post_init 00:04:51.258 ************************************ 00:04:51.258 08:16:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.517 EAL: Detected CPU lcores: 10 00:04:51.517 EAL: Detected NUMA nodes: 1 00:04:51.517 EAL: Detected shared linkage of DPDK 00:04:51.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.517 EAL: Selected IOVA mode 'PA' 00:04:51.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:51.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:51.517 Starting DPDK initialization... 00:04:51.517 Starting SPDK post initialization... 00:04:51.517 SPDK NVMe probe 00:04:51.517 Attaching to 0000:00:10.0 00:04:51.517 Attaching to 0000:00:11.0 00:04:51.517 Attached to 0000:00:10.0 00:04:51.517 Attached to 0000:00:11.0 00:04:51.517 Cleaning up... 
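The env_dpdk_post_init test above probes and attaches to two emulated NVMe controllers (vendor:device 1b36:0010) at PCI addresses 0000:00:10.0 and 0000:00:11.0. As a minimal sketch, independent of SPDK, such a domain:bus:device.function (BDF) string splits into its numeric fields like this:

```python
def parse_bdf(bdf: str):
    """Split a PCI address like '0000:00:10.0' into
    (domain, bus, device, function) as integers (fields are hex)."""
    domain, bus, devfn = bdf.split(":")
    device, function = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

print(parse_bdf("0000:00:10.0"))  # (0, 0, 16, 0)
```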
00:04:51.517 00:04:51.517 real 0m0.279s 00:04:51.517 user 0m0.092s 00:04:51.517 sys 0m0.088s 00:04:51.517 08:16:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.517 08:16:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 ************************************ 00:04:51.517 END TEST env_dpdk_post_init 00:04:51.517 ************************************ 00:04:51.776 08:16:03 env -- env/env.sh@26 -- # uname 00:04:51.776 08:16:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.776 08:16:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.776 08:16:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.776 08:16:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.776 08:16:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.776 ************************************ 00:04:51.776 START TEST env_mem_callbacks 00:04:51.776 ************************************ 00:04:51.776 08:16:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.776 EAL: Detected CPU lcores: 10 00:04:51.776 EAL: Detected NUMA nodes: 1 00:04:51.776 EAL: Detected shared linkage of DPDK 00:04:51.776 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.776 EAL: Selected IOVA mode 'PA' 00:04:51.776 00:04:51.776 00:04:51.776 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.776 http://cunit.sourceforge.net/ 00:04:51.776 00:04:51.776 00:04:51.776 Suite: memory 00:04:51.776 Test: test ... 
00:04:51.776 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.776 register 0x200000200000 2097152 00:04:51.776 malloc 3145728 00:04:51.776 register 0x200000400000 4194304 00:04:51.776 buf 0x2000004fffc0 len 3145728 PASSED 00:04:51.776 malloc 64 00:04:51.776 buf 0x2000004ffec0 len 64 PASSED 00:04:51.776 malloc 4194304 00:04:51.776 register 0x200000800000 6291456 00:04:51.776 buf 0x2000009fffc0 len 4194304 PASSED 00:04:51.776 free 0x2000004fffc0 3145728 00:04:52.036 free 0x2000004ffec0 64 00:04:52.036 unregister 0x200000400000 4194304 PASSED 00:04:52.036 free 0x2000009fffc0 4194304 00:04:52.036 unregister 0x200000800000 6291456 PASSED 00:04:52.036 malloc 8388608 00:04:52.036 register 0x200000400000 10485760 00:04:52.036 buf 0x2000005fffc0 len 8388608 PASSED 00:04:52.036 free 0x2000005fffc0 8388608 00:04:52.036 unregister 0x200000400000 10485760 PASSED 00:04:52.036 passed 00:04:52.036 00:04:52.036 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.036 suites 1 1 n/a 0 0 00:04:52.036 tests 1 1 1 0 0 00:04:52.036 asserts 15 15 15 0 n/a 00:04:52.036 00:04:52.036 Elapsed time = 0.085 seconds 00:04:52.036 00:04:52.036 real 0m0.284s 00:04:52.036 user 0m0.111s 00:04:52.036 sys 0m0.071s 00:04:52.036 08:16:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.036 08:16:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:52.036 ************************************ 00:04:52.036 END TEST env_mem_callbacks 00:04:52.036 ************************************ 00:04:52.036 ************************************ 00:04:52.036 END TEST env 00:04:52.036 ************************************ 00:04:52.036 00:04:52.036 real 0m10.150s 00:04:52.036 user 0m8.370s 00:04:52.036 sys 0m1.427s 00:04:52.036 08:16:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.036 08:16:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.036 08:16:04 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.036 08:16:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.036 08:16:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.036 08:16:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.036 ************************************ 00:04:52.036 START TEST rpc 00:04:52.036 ************************************ 00:04:52.036 08:16:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.296 * Looking for test storage... 00:04:52.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.296 08:16:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.296 08:16:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.296 08:16:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.296 08:16:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.296 08:16:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.296 08:16:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.296 08:16:04 rpc -- scripts/common.sh@345 -- # : 1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.296 08:16:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.296 08:16:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.296 08:16:04 rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.296 08:16:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.296 08:16:04 rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.296 08:16:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.296 08:16:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.296 08:16:04 rpc -- scripts/common.sh@368 -- # return 0 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.296 --rc genhtml_branch_coverage=1 00:04:52.296 --rc genhtml_function_coverage=1 00:04:52.296 --rc genhtml_legend=1 00:04:52.296 --rc geninfo_all_blocks=1 00:04:52.296 --rc geninfo_unexecuted_blocks=1 00:04:52.296 00:04:52.296 ' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.296 --rc genhtml_branch_coverage=1 00:04:52.296 --rc genhtml_function_coverage=1 00:04:52.296 --rc genhtml_legend=1 00:04:52.296 --rc geninfo_all_blocks=1 00:04:52.296 --rc geninfo_unexecuted_blocks=1 00:04:52.296 00:04:52.296 ' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:52.296 --rc genhtml_branch_coverage=1 00:04:52.296 --rc genhtml_function_coverage=1 00:04:52.296 --rc genhtml_legend=1 00:04:52.296 --rc geninfo_all_blocks=1 00:04:52.296 --rc geninfo_unexecuted_blocks=1 00:04:52.296 00:04:52.296 ' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.296 --rc genhtml_branch_coverage=1 00:04:52.296 --rc genhtml_function_coverage=1 00:04:52.296 --rc genhtml_legend=1 00:04:52.296 --rc geninfo_all_blocks=1 00:04:52.296 --rc geninfo_unexecuted_blocks=1 00:04:52.296 00:04:52.296 ' 00:04:52.296 08:16:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57002 00:04:52.296 08:16:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:52.296 08:16:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.296 08:16:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57002 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 57002 ']' 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.296 08:16:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.296 [2024-12-13 08:16:04.655034] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:52.296 [2024-12-13 08:16:04.655267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57002 ] 00:04:52.555 [2024-12-13 08:16:04.829572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.814 [2024-12-13 08:16:04.941010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.815 [2024-12-13 08:16:04.941162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57002' to capture a snapshot of events at runtime. 00:04:52.815 [2024-12-13 08:16:04.941205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:52.815 [2024-12-13 08:16:04.941246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:52.815 [2024-12-13 08:16:04.941266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57002 for offline analysis/debug. 
00:04:52.815 [2024-12-13 08:16:04.942472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.754 08:16:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.754 08:16:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:53.754 08:16:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.754 08:16:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.754 08:16:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.754 08:16:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.754 08:16:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.754 08:16:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.754 08:16:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.754 ************************************ 00:04:53.754 START TEST rpc_integrity 00:04:53.754 ************************************ 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.754 08:16:05 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.754 08:16:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.754 { 00:04:53.754 "name": "Malloc0", 00:04:53.754 "aliases": [ 00:04:53.754 "7b773875-2b69-438a-9fdb-253b8d194c46" 00:04:53.754 ], 00:04:53.754 "product_name": "Malloc disk", 00:04:53.754 "block_size": 512, 00:04:53.754 "num_blocks": 16384, 00:04:53.754 "uuid": "7b773875-2b69-438a-9fdb-253b8d194c46", 00:04:53.754 "assigned_rate_limits": { 00:04:53.754 "rw_ios_per_sec": 0, 00:04:53.754 "rw_mbytes_per_sec": 0, 00:04:53.754 "r_mbytes_per_sec": 0, 00:04:53.754 "w_mbytes_per_sec": 0 00:04:53.754 }, 00:04:53.754 "claimed": false, 00:04:53.754 "zoned": false, 00:04:53.754 "supported_io_types": { 00:04:53.754 "read": true, 00:04:53.754 "write": true, 00:04:53.754 "unmap": true, 00:04:53.754 "flush": true, 00:04:53.754 "reset": true, 00:04:53.754 "nvme_admin": false, 00:04:53.754 "nvme_io": false, 00:04:53.754 "nvme_io_md": false, 00:04:53.754 "write_zeroes": true, 00:04:53.754 "zcopy": true, 00:04:53.754 "get_zone_info": false, 00:04:53.754 "zone_management": false, 00:04:53.754 "zone_append": false, 00:04:53.754 "compare": false, 00:04:53.754 "compare_and_write": false, 00:04:53.754 "abort": true, 00:04:53.754 "seek_hole": false, 
00:04:53.754 "seek_data": false, 00:04:53.754 "copy": true, 00:04:53.754 "nvme_iov_md": false 00:04:53.754 }, 00:04:53.754 "memory_domains": [ 00:04:53.754 { 00:04:53.754 "dma_device_id": "system", 00:04:53.754 "dma_device_type": 1 00:04:53.754 }, 00:04:53.754 { 00:04:53.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.754 "dma_device_type": 2 00:04:53.754 } 00:04:53.754 ], 00:04:53.754 "driver_specific": {} 00:04:53.754 } 00:04:53.754 ]' 00:04:53.754 08:16:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.754 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.754 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.754 [2024-12-13 08:16:06.023852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:53.754 [2024-12-13 08:16:06.024007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.754 [2024-12-13 08:16:06.024042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:53.754 [2024-12-13 08:16:06.024059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.754 [2024-12-13 08:16:06.026591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.754 [2024-12-13 08:16:06.026637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.754 Passthru0 00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.754 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:53.754 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.754 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.754 { 00:04:53.754 "name": "Malloc0", 00:04:53.754 "aliases": [ 00:04:53.754 "7b773875-2b69-438a-9fdb-253b8d194c46" 00:04:53.754 ], 00:04:53.754 "product_name": "Malloc disk", 00:04:53.754 "block_size": 512, 00:04:53.754 "num_blocks": 16384, 00:04:53.754 "uuid": "7b773875-2b69-438a-9fdb-253b8d194c46", 00:04:53.754 "assigned_rate_limits": { 00:04:53.754 "rw_ios_per_sec": 0, 00:04:53.754 "rw_mbytes_per_sec": 0, 00:04:53.754 "r_mbytes_per_sec": 0, 00:04:53.754 "w_mbytes_per_sec": 0 00:04:53.754 }, 00:04:53.754 "claimed": true, 00:04:53.754 "claim_type": "exclusive_write", 00:04:53.754 "zoned": false, 00:04:53.754 "supported_io_types": { 00:04:53.754 "read": true, 00:04:53.754 "write": true, 00:04:53.754 "unmap": true, 00:04:53.754 "flush": true, 00:04:53.754 "reset": true, 00:04:53.754 "nvme_admin": false, 00:04:53.754 "nvme_io": false, 00:04:53.754 "nvme_io_md": false, 00:04:53.754 "write_zeroes": true, 00:04:53.754 "zcopy": true, 00:04:53.754 "get_zone_info": false, 00:04:53.754 "zone_management": false, 00:04:53.754 "zone_append": false, 00:04:53.754 "compare": false, 00:04:53.754 "compare_and_write": false, 00:04:53.754 "abort": true, 00:04:53.754 "seek_hole": false, 00:04:53.754 "seek_data": false, 00:04:53.754 "copy": true, 00:04:53.754 "nvme_iov_md": false 00:04:53.754 }, 00:04:53.754 "memory_domains": [ 00:04:53.754 { 00:04:53.754 "dma_device_id": "system", 00:04:53.754 "dma_device_type": 1 00:04:53.754 }, 00:04:53.754 { 00:04:53.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.754 "dma_device_type": 2 00:04:53.754 } 00:04:53.754 ], 00:04:53.754 "driver_specific": {} 00:04:53.754 }, 00:04:53.754 { 00:04:53.754 "name": "Passthru0", 00:04:53.755 "aliases": [ 00:04:53.755 "5745fffb-badc-59f0-b315-9b928b3d4cd3" 00:04:53.755 ], 00:04:53.755 "product_name": "passthru", 00:04:53.755 
"block_size": 512, 00:04:53.755 "num_blocks": 16384, 00:04:53.755 "uuid": "5745fffb-badc-59f0-b315-9b928b3d4cd3", 00:04:53.755 "assigned_rate_limits": { 00:04:53.755 "rw_ios_per_sec": 0, 00:04:53.755 "rw_mbytes_per_sec": 0, 00:04:53.755 "r_mbytes_per_sec": 0, 00:04:53.755 "w_mbytes_per_sec": 0 00:04:53.755 }, 00:04:53.755 "claimed": false, 00:04:53.755 "zoned": false, 00:04:53.755 "supported_io_types": { 00:04:53.755 "read": true, 00:04:53.755 "write": true, 00:04:53.755 "unmap": true, 00:04:53.755 "flush": true, 00:04:53.755 "reset": true, 00:04:53.755 "nvme_admin": false, 00:04:53.755 "nvme_io": false, 00:04:53.755 "nvme_io_md": false, 00:04:53.755 "write_zeroes": true, 00:04:53.755 "zcopy": true, 00:04:53.755 "get_zone_info": false, 00:04:53.755 "zone_management": false, 00:04:53.755 "zone_append": false, 00:04:53.755 "compare": false, 00:04:53.755 "compare_and_write": false, 00:04:53.755 "abort": true, 00:04:53.755 "seek_hole": false, 00:04:53.755 "seek_data": false, 00:04:53.755 "copy": true, 00:04:53.755 "nvme_iov_md": false 00:04:53.755 }, 00:04:53.755 "memory_domains": [ 00:04:53.755 { 00:04:53.755 "dma_device_id": "system", 00:04:53.755 "dma_device_type": 1 00:04:53.755 }, 00:04:53.755 { 00:04:53.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.755 "dma_device_type": 2 00:04:53.755 } 00:04:53.755 ], 00:04:53.755 "driver_specific": { 00:04:53.755 "passthru": { 00:04:53.755 "name": "Passthru0", 00:04:53.755 "base_bdev_name": "Malloc0" 00:04:53.755 } 00:04:53.755 } 00:04:53.755 } 00:04:53.755 ]' 00:04:53.755 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.755 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.755 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.755 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.755 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.755 08:16:06 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.755 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:53.755 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.755 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.015 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.015 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.015 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:54.015 ************************************ 00:04:54.015 END TEST rpc_integrity 00:04:54.015 ************************************ 00:04:54.015 08:16:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.015 00:04:54.015 real 0m0.348s 00:04:54.015 user 0m0.191s 00:04:54.015 sys 0m0.050s 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.015 08:16:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 08:16:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:54.015 08:16:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.015 08:16:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.015 08:16:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 ************************************ 00:04:54.015 START TEST rpc_plugins 00:04:54.015 ************************************ 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:54.015 { 00:04:54.015 "name": "Malloc1", 00:04:54.015 "aliases": [ 00:04:54.015 "7fc3a571-219b-49f1-9711-710b2c7200a9" 00:04:54.015 ], 00:04:54.015 "product_name": "Malloc disk", 00:04:54.015 "block_size": 4096, 00:04:54.015 "num_blocks": 256, 00:04:54.015 "uuid": "7fc3a571-219b-49f1-9711-710b2c7200a9", 00:04:54.015 "assigned_rate_limits": { 00:04:54.015 "rw_ios_per_sec": 0, 00:04:54.015 "rw_mbytes_per_sec": 0, 00:04:54.015 "r_mbytes_per_sec": 0, 00:04:54.015 "w_mbytes_per_sec": 0 00:04:54.015 }, 00:04:54.015 "claimed": false, 00:04:54.015 "zoned": false, 00:04:54.015 "supported_io_types": { 00:04:54.015 "read": true, 00:04:54.015 "write": true, 00:04:54.015 "unmap": true, 00:04:54.015 "flush": true, 00:04:54.015 "reset": true, 00:04:54.015 "nvme_admin": false, 00:04:54.015 "nvme_io": false, 00:04:54.015 "nvme_io_md": false, 00:04:54.015 "write_zeroes": true, 00:04:54.015 "zcopy": true, 00:04:54.015 "get_zone_info": false, 00:04:54.015 "zone_management": false, 00:04:54.015 "zone_append": false, 00:04:54.015 "compare": false, 00:04:54.015 "compare_and_write": false, 00:04:54.015 "abort": true, 00:04:54.015 "seek_hole": false, 00:04:54.015 "seek_data": false, 00:04:54.015 "copy": 
true, 00:04:54.015 "nvme_iov_md": false 00:04:54.015 }, 00:04:54.015 "memory_domains": [ 00:04:54.015 { 00:04:54.015 "dma_device_id": "system", 00:04:54.015 "dma_device_type": 1 00:04:54.015 }, 00:04:54.015 { 00:04:54.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.015 "dma_device_type": 2 00:04:54.015 } 00:04:54.015 ], 00:04:54.015 "driver_specific": {} 00:04:54.015 } 00:04:54.015 ]' 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:54.015 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.015 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.275 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.275 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:54.275 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:54.275 ************************************ 00:04:54.275 END TEST rpc_plugins 00:04:54.275 ************************************ 00:04:54.275 08:16:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.275 00:04:54.275 real 0m0.180s 00:04:54.275 user 0m0.100s 00:04:54.275 sys 0m0.027s 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.275 08:16:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 08:16:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.275 08:16:06 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.275 08:16:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.275 08:16:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 ************************************ 00:04:54.275 START TEST rpc_trace_cmd_test 00:04:54.275 ************************************ 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.275 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:54.275 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57002", 00:04:54.275 "tpoint_group_mask": "0x8", 00:04:54.275 "iscsi_conn": { 00:04:54.275 "mask": "0x2", 00:04:54.275 "tpoint_mask": "0x0" 00:04:54.275 }, 00:04:54.275 "scsi": { 00:04:54.275 "mask": "0x4", 00:04:54.275 "tpoint_mask": "0x0" 00:04:54.275 }, 00:04:54.275 "bdev": { 00:04:54.275 "mask": "0x8", 00:04:54.275 "tpoint_mask": "0xffffffffffffffff" 00:04:54.275 }, 00:04:54.275 "nvmf_rdma": { 00:04:54.275 "mask": "0x10", 00:04:54.275 "tpoint_mask": "0x0" 00:04:54.275 }, 00:04:54.275 "nvmf_tcp": { 00:04:54.275 "mask": "0x20", 00:04:54.275 "tpoint_mask": "0x0" 00:04:54.275 }, 00:04:54.275 "ftl": { 00:04:54.276 "mask": "0x40", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "blobfs": { 00:04:54.276 "mask": "0x80", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "dsa": { 00:04:54.276 "mask": "0x200", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "thread": { 00:04:54.276 "mask": "0x400", 00:04:54.276 
"tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "nvme_pcie": { 00:04:54.276 "mask": "0x800", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "iaa": { 00:04:54.276 "mask": "0x1000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "nvme_tcp": { 00:04:54.276 "mask": "0x2000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "bdev_nvme": { 00:04:54.276 "mask": "0x4000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "sock": { 00:04:54.276 "mask": "0x8000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "blob": { 00:04:54.276 "mask": "0x10000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "bdev_raid": { 00:04:54.276 "mask": "0x20000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 }, 00:04:54.276 "scheduler": { 00:04:54.276 "mask": "0x40000", 00:04:54.276 "tpoint_mask": "0x0" 00:04:54.276 } 00:04:54.276 }' 00:04:54.276 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:54.276 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:54.276 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.276 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.276 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.535 ************************************ 00:04:54.535 END TEST rpc_trace_cmd_test 00:04:54.535 ************************************ 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.535 00:04:54.535 real 0m0.232s 00:04:54.535 user 
0m0.183s 00:04:54.535 sys 0m0.040s 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.535 08:16:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.535 08:16:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.535 08:16:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.535 08:16:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.535 08:16:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.535 08:16:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.535 08:16:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.535 ************************************ 00:04:54.535 START TEST rpc_daemon_integrity 00:04:54.535 ************************************ 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.535 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.536 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.536 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.536 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.795 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.796 { 00:04:54.796 "name": "Malloc2", 00:04:54.796 "aliases": [ 00:04:54.796 "96b1f866-5f46-422c-ae7b-82dae87a7abc" 00:04:54.796 ], 00:04:54.796 "product_name": "Malloc disk", 00:04:54.796 "block_size": 512, 00:04:54.796 "num_blocks": 16384, 00:04:54.796 "uuid": "96b1f866-5f46-422c-ae7b-82dae87a7abc", 00:04:54.796 "assigned_rate_limits": { 00:04:54.796 "rw_ios_per_sec": 0, 00:04:54.796 "rw_mbytes_per_sec": 0, 00:04:54.796 "r_mbytes_per_sec": 0, 00:04:54.796 "w_mbytes_per_sec": 0 00:04:54.796 }, 00:04:54.796 "claimed": false, 00:04:54.796 "zoned": false, 00:04:54.796 "supported_io_types": { 00:04:54.796 "read": true, 00:04:54.796 "write": true, 00:04:54.796 "unmap": true, 00:04:54.796 "flush": true, 00:04:54.796 "reset": true, 00:04:54.796 "nvme_admin": false, 00:04:54.796 "nvme_io": false, 00:04:54.796 "nvme_io_md": false, 00:04:54.796 "write_zeroes": true, 00:04:54.796 "zcopy": true, 00:04:54.796 "get_zone_info": false, 00:04:54.796 "zone_management": false, 00:04:54.796 "zone_append": false, 00:04:54.796 "compare": false, 00:04:54.796 "compare_and_write": false, 00:04:54.796 "abort": true, 00:04:54.796 "seek_hole": false, 00:04:54.796 "seek_data": false, 00:04:54.796 "copy": true, 00:04:54.796 "nvme_iov_md": false 00:04:54.796 }, 00:04:54.796 "memory_domains": [ 00:04:54.796 { 00:04:54.796 "dma_device_id": "system", 00:04:54.796 "dma_device_type": 1 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.796 "dma_device_type": 2 00:04:54.796 } 
00:04:54.796 ], 00:04:54.796 "driver_specific": {} 00:04:54.796 } 00:04:54.796 ]' 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 [2024-12-13 08:16:06.981743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.796 [2024-12-13 08:16:06.981815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.796 [2024-12-13 08:16:06.981841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:54.796 [2024-12-13 08:16:06.981852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.796 [2024-12-13 08:16:06.984357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.796 [2024-12-13 08:16:06.984400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.796 Passthru0 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.796 08:16:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.796 { 00:04:54.796 "name": "Malloc2", 00:04:54.796 "aliases": [ 00:04:54.796 "96b1f866-5f46-422c-ae7b-82dae87a7abc" 
00:04:54.796 ], 00:04:54.796 "product_name": "Malloc disk", 00:04:54.796 "block_size": 512, 00:04:54.796 "num_blocks": 16384, 00:04:54.796 "uuid": "96b1f866-5f46-422c-ae7b-82dae87a7abc", 00:04:54.796 "assigned_rate_limits": { 00:04:54.796 "rw_ios_per_sec": 0, 00:04:54.796 "rw_mbytes_per_sec": 0, 00:04:54.796 "r_mbytes_per_sec": 0, 00:04:54.796 "w_mbytes_per_sec": 0 00:04:54.796 }, 00:04:54.796 "claimed": true, 00:04:54.796 "claim_type": "exclusive_write", 00:04:54.796 "zoned": false, 00:04:54.796 "supported_io_types": { 00:04:54.796 "read": true, 00:04:54.796 "write": true, 00:04:54.796 "unmap": true, 00:04:54.796 "flush": true, 00:04:54.796 "reset": true, 00:04:54.796 "nvme_admin": false, 00:04:54.796 "nvme_io": false, 00:04:54.796 "nvme_io_md": false, 00:04:54.796 "write_zeroes": true, 00:04:54.796 "zcopy": true, 00:04:54.796 "get_zone_info": false, 00:04:54.796 "zone_management": false, 00:04:54.796 "zone_append": false, 00:04:54.796 "compare": false, 00:04:54.796 "compare_and_write": false, 00:04:54.796 "abort": true, 00:04:54.796 "seek_hole": false, 00:04:54.796 "seek_data": false, 00:04:54.796 "copy": true, 00:04:54.796 "nvme_iov_md": false 00:04:54.796 }, 00:04:54.796 "memory_domains": [ 00:04:54.796 { 00:04:54.796 "dma_device_id": "system", 00:04:54.796 "dma_device_type": 1 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.796 "dma_device_type": 2 00:04:54.796 } 00:04:54.796 ], 00:04:54.796 "driver_specific": {} 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "name": "Passthru0", 00:04:54.796 "aliases": [ 00:04:54.796 "cdf670ca-d9a3-56e9-b070-3f7346c5cd22" 00:04:54.796 ], 00:04:54.796 "product_name": "passthru", 00:04:54.796 "block_size": 512, 00:04:54.796 "num_blocks": 16384, 00:04:54.796 "uuid": "cdf670ca-d9a3-56e9-b070-3f7346c5cd22", 00:04:54.796 "assigned_rate_limits": { 00:04:54.796 "rw_ios_per_sec": 0, 00:04:54.796 "rw_mbytes_per_sec": 0, 00:04:54.796 "r_mbytes_per_sec": 0, 00:04:54.796 "w_mbytes_per_sec": 0 
00:04:54.796 }, 00:04:54.796 "claimed": false, 00:04:54.796 "zoned": false, 00:04:54.796 "supported_io_types": { 00:04:54.796 "read": true, 00:04:54.796 "write": true, 00:04:54.796 "unmap": true, 00:04:54.796 "flush": true, 00:04:54.796 "reset": true, 00:04:54.796 "nvme_admin": false, 00:04:54.796 "nvme_io": false, 00:04:54.796 "nvme_io_md": false, 00:04:54.796 "write_zeroes": true, 00:04:54.796 "zcopy": true, 00:04:54.796 "get_zone_info": false, 00:04:54.796 "zone_management": false, 00:04:54.796 "zone_append": false, 00:04:54.796 "compare": false, 00:04:54.796 "compare_and_write": false, 00:04:54.796 "abort": true, 00:04:54.796 "seek_hole": false, 00:04:54.796 "seek_data": false, 00:04:54.796 "copy": true, 00:04:54.796 "nvme_iov_md": false 00:04:54.796 }, 00:04:54.796 "memory_domains": [ 00:04:54.796 { 00:04:54.796 "dma_device_id": "system", 00:04:54.796 "dma_device_type": 1 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.796 "dma_device_type": 2 00:04:54.796 } 00:04:54.796 ], 00:04:54.796 "driver_specific": { 00:04:54.796 "passthru": { 00:04:54.796 "name": "Passthru0", 00:04:54.796 "base_bdev_name": "Malloc2" 00:04:54.796 } 00:04:54.796 } 00:04:54.796 } 00:04:54.796 ]' 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.796 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:55.056 08:16:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.056 00:04:55.056 real 0m0.375s 00:04:55.056 user 0m0.198s 00:04:55.056 sys 0m0.067s 00:04:55.056 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.056 08:16:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.056 ************************************ 00:04:55.056 END TEST rpc_daemon_integrity 00:04:55.056 ************************************ 00:04:55.056 08:16:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:55.056 08:16:07 rpc -- rpc/rpc.sh@84 -- # killprocess 57002 00:04:55.056 08:16:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 57002 ']' 00:04:55.056 08:16:07 rpc -- common/autotest_common.sh@958 -- # kill -0 57002 00:04:55.056 08:16:07 rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.056 08:16:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57002 00:04:55.057 killing process with pid 57002 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57002' 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@973 -- # kill 57002 00:04:55.057 08:16:07 rpc -- common/autotest_common.sh@978 -- # wait 57002 00:04:57.590 00:04:57.590 real 0m5.496s 00:04:57.590 user 0m6.030s 00:04:57.590 sys 0m0.922s 00:04:57.590 08:16:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.590 08:16:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.590 ************************************ 00:04:57.590 END TEST rpc 00:04:57.590 ************************************ 00:04:57.590 08:16:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:57.590 08:16:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.590 08:16:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.590 08:16:09 -- common/autotest_common.sh@10 -- # set +x 00:04:57.590 ************************************ 00:04:57.590 START TEST skip_rpc 00:04:57.590 ************************************ 00:04:57.590 08:16:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:57.850 * Looking for test storage... 
00:04:57.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.850 08:16:09 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.850 08:16:09 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.850 08:16:09 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.850 08:16:10 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.850 08:16:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.851 --rc genhtml_branch_coverage=1 00:04:57.851 --rc genhtml_function_coverage=1 00:04:57.851 --rc genhtml_legend=1 00:04:57.851 --rc geninfo_all_blocks=1 00:04:57.851 --rc geninfo_unexecuted_blocks=1 00:04:57.851 00:04:57.851 ' 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.851 --rc genhtml_branch_coverage=1 00:04:57.851 --rc genhtml_function_coverage=1 00:04:57.851 --rc genhtml_legend=1 00:04:57.851 --rc geninfo_all_blocks=1 00:04:57.851 --rc geninfo_unexecuted_blocks=1 00:04:57.851 00:04:57.851 ' 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.851 --rc genhtml_branch_coverage=1 00:04:57.851 --rc genhtml_function_coverage=1 00:04:57.851 --rc genhtml_legend=1 00:04:57.851 --rc geninfo_all_blocks=1 00:04:57.851 --rc geninfo_unexecuted_blocks=1 00:04:57.851 00:04:57.851 ' 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.851 --rc genhtml_branch_coverage=1 00:04:57.851 --rc genhtml_function_coverage=1 00:04:57.851 --rc genhtml_legend=1 00:04:57.851 --rc geninfo_all_blocks=1 00:04:57.851 --rc geninfo_unexecuted_blocks=1 00:04:57.851 00:04:57.851 ' 00:04:57.851 08:16:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.851 08:16:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.851 08:16:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.851 08:16:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.851 ************************************ 00:04:57.851 START TEST skip_rpc 00:04:57.851 ************************************ 00:04:57.851 08:16:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:57.851 08:16:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57231 00:04:57.851 08:16:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.851 08:16:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.851 08:16:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:58.111 [2024-12-13 08:16:10.221510] Starting SPDK v25.01-pre 
git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:58.111 [2024-12-13 08:16:10.221720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:04:58.111 [2024-12-13 08:16:10.386185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.370 [2024-12-13 08:16:10.511277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57231 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57231 ']' 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57231 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57231 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57231' 00:05:03.645 killing process with pid 57231 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57231 00:05:03.645 08:16:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57231 00:05:05.579 00:05:05.579 real 0m7.640s 00:05:05.579 user 0m7.180s 00:05:05.579 sys 0m0.380s 00:05:05.579 08:16:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.579 08:16:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.579 ************************************ 00:05:05.579 END TEST skip_rpc 00:05:05.579 ************************************ 00:05:05.579 08:16:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.579 08:16:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.579 08:16:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.579 08:16:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.579 
************************************ 00:05:05.579 START TEST skip_rpc_with_json 00:05:05.579 ************************************ 00:05:05.579 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:05.579 08:16:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.579 08:16:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.579 08:16:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57346 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57346 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57346 ']' 00:05:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.580 08:16:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 [2024-12-13 08:16:17.916248] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:05.580 [2024-12-13 08:16:17.916469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57346 ] 00:05:05.839 [2024-12-13 08:16:18.106029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.098 [2024-12-13 08:16:18.241383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.036 [2024-12-13 08:16:19.198513] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.036 request: 00:05:07.036 { 00:05:07.036 "trtype": "tcp", 00:05:07.036 "method": "nvmf_get_transports", 00:05:07.036 "req_id": 1 00:05:07.036 } 00:05:07.036 Got JSON-RPC error response 00:05:07.036 response: 00:05:07.036 { 00:05:07.036 "code": -19, 00:05:07.036 "message": "No such device" 00:05:07.036 } 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.036 [2024-12-13 08:16:19.210640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.036 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.036 { 00:05:07.036 "subsystems": [ 00:05:07.036 { 00:05:07.036 "subsystem": "fsdev", 00:05:07.036 "config": [ 00:05:07.036 { 00:05:07.036 "method": "fsdev_set_opts", 00:05:07.036 "params": { 00:05:07.036 "fsdev_io_pool_size": 65535, 00:05:07.036 "fsdev_io_cache_size": 256 00:05:07.036 } 00:05:07.036 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "keyring", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "iobuf", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "iobuf_set_options", 00:05:07.037 "params": { 00:05:07.037 "small_pool_count": 8192, 00:05:07.037 "large_pool_count": 1024, 00:05:07.037 "small_bufsize": 8192, 00:05:07.037 "large_bufsize": 135168, 00:05:07.037 "enable_numa": false 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "sock", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "sock_set_default_impl", 00:05:07.037 "params": { 00:05:07.037 "impl_name": "posix" 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "sock_impl_set_options", 00:05:07.037 "params": { 00:05:07.037 "impl_name": "ssl", 00:05:07.037 "recv_buf_size": 4096, 00:05:07.037 "send_buf_size": 4096, 00:05:07.037 "enable_recv_pipe": true, 00:05:07.037 "enable_quickack": false, 00:05:07.037 
"enable_placement_id": 0, 00:05:07.037 "enable_zerocopy_send_server": true, 00:05:07.037 "enable_zerocopy_send_client": false, 00:05:07.037 "zerocopy_threshold": 0, 00:05:07.037 "tls_version": 0, 00:05:07.037 "enable_ktls": false 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "sock_impl_set_options", 00:05:07.037 "params": { 00:05:07.037 "impl_name": "posix", 00:05:07.037 "recv_buf_size": 2097152, 00:05:07.037 "send_buf_size": 2097152, 00:05:07.037 "enable_recv_pipe": true, 00:05:07.037 "enable_quickack": false, 00:05:07.037 "enable_placement_id": 0, 00:05:07.037 "enable_zerocopy_send_server": true, 00:05:07.037 "enable_zerocopy_send_client": false, 00:05:07.037 "zerocopy_threshold": 0, 00:05:07.037 "tls_version": 0, 00:05:07.037 "enable_ktls": false 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "vmd", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "accel", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "accel_set_options", 00:05:07.037 "params": { 00:05:07.037 "small_cache_size": 128, 00:05:07.037 "large_cache_size": 16, 00:05:07.037 "task_count": 2048, 00:05:07.037 "sequence_count": 2048, 00:05:07.037 "buf_count": 2048 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "bdev", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "bdev_set_options", 00:05:07.037 "params": { 00:05:07.037 "bdev_io_pool_size": 65535, 00:05:07.037 "bdev_io_cache_size": 256, 00:05:07.037 "bdev_auto_examine": true, 00:05:07.037 "iobuf_small_cache_size": 128, 00:05:07.037 "iobuf_large_cache_size": 16 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "bdev_raid_set_options", 00:05:07.037 "params": { 00:05:07.037 "process_window_size_kb": 1024, 00:05:07.037 "process_max_bandwidth_mb_sec": 0 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "bdev_iscsi_set_options", 
00:05:07.037 "params": { 00:05:07.037 "timeout_sec": 30 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "bdev_nvme_set_options", 00:05:07.037 "params": { 00:05:07.037 "action_on_timeout": "none", 00:05:07.037 "timeout_us": 0, 00:05:07.037 "timeout_admin_us": 0, 00:05:07.037 "keep_alive_timeout_ms": 10000, 00:05:07.037 "arbitration_burst": 0, 00:05:07.037 "low_priority_weight": 0, 00:05:07.037 "medium_priority_weight": 0, 00:05:07.037 "high_priority_weight": 0, 00:05:07.037 "nvme_adminq_poll_period_us": 10000, 00:05:07.037 "nvme_ioq_poll_period_us": 0, 00:05:07.037 "io_queue_requests": 0, 00:05:07.037 "delay_cmd_submit": true, 00:05:07.037 "transport_retry_count": 4, 00:05:07.037 "bdev_retry_count": 3, 00:05:07.037 "transport_ack_timeout": 0, 00:05:07.037 "ctrlr_loss_timeout_sec": 0, 00:05:07.037 "reconnect_delay_sec": 0, 00:05:07.037 "fast_io_fail_timeout_sec": 0, 00:05:07.037 "disable_auto_failback": false, 00:05:07.037 "generate_uuids": false, 00:05:07.037 "transport_tos": 0, 00:05:07.037 "nvme_error_stat": false, 00:05:07.037 "rdma_srq_size": 0, 00:05:07.037 "io_path_stat": false, 00:05:07.037 "allow_accel_sequence": false, 00:05:07.037 "rdma_max_cq_size": 0, 00:05:07.037 "rdma_cm_event_timeout_ms": 0, 00:05:07.037 "dhchap_digests": [ 00:05:07.037 "sha256", 00:05:07.037 "sha384", 00:05:07.037 "sha512" 00:05:07.037 ], 00:05:07.037 "dhchap_dhgroups": [ 00:05:07.037 "null", 00:05:07.037 "ffdhe2048", 00:05:07.037 "ffdhe3072", 00:05:07.037 "ffdhe4096", 00:05:07.037 "ffdhe6144", 00:05:07.037 "ffdhe8192" 00:05:07.037 ] 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "bdev_nvme_set_hotplug", 00:05:07.037 "params": { 00:05:07.037 "period_us": 100000, 00:05:07.037 "enable": false 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "bdev_wait_for_examine" 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "scsi", 00:05:07.037 "config": null 00:05:07.037 }, 00:05:07.037 { 
00:05:07.037 "subsystem": "scheduler", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "framework_set_scheduler", 00:05:07.037 "params": { 00:05:07.037 "name": "static" 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "vhost_scsi", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "vhost_blk", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "ublk", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "nbd", 00:05:07.037 "config": [] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "nvmf", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "nvmf_set_config", 00:05:07.037 "params": { 00:05:07.037 "discovery_filter": "match_any", 00:05:07.037 "admin_cmd_passthru": { 00:05:07.037 "identify_ctrlr": false 00:05:07.037 }, 00:05:07.037 "dhchap_digests": [ 00:05:07.037 "sha256", 00:05:07.037 "sha384", 00:05:07.037 "sha512" 00:05:07.037 ], 00:05:07.037 "dhchap_dhgroups": [ 00:05:07.037 "null", 00:05:07.037 "ffdhe2048", 00:05:07.037 "ffdhe3072", 00:05:07.037 "ffdhe4096", 00:05:07.037 "ffdhe6144", 00:05:07.037 "ffdhe8192" 00:05:07.037 ] 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "nvmf_set_max_subsystems", 00:05:07.037 "params": { 00:05:07.037 "max_subsystems": 1024 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "nvmf_set_crdt", 00:05:07.037 "params": { 00:05:07.037 "crdt1": 0, 00:05:07.037 "crdt2": 0, 00:05:07.037 "crdt3": 0 00:05:07.037 } 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "method": "nvmf_create_transport", 00:05:07.037 "params": { 00:05:07.037 "trtype": "TCP", 00:05:07.037 "max_queue_depth": 128, 00:05:07.037 "max_io_qpairs_per_ctrlr": 127, 00:05:07.037 "in_capsule_data_size": 4096, 00:05:07.037 "max_io_size": 131072, 00:05:07.037 "io_unit_size": 131072, 00:05:07.037 "max_aq_depth": 128, 00:05:07.037 "num_shared_buffers": 511, 
00:05:07.037 "buf_cache_size": 4294967295, 00:05:07.037 "dif_insert_or_strip": false, 00:05:07.037 "zcopy": false, 00:05:07.037 "c2h_success": true, 00:05:07.037 "sock_priority": 0, 00:05:07.037 "abort_timeout_sec": 1, 00:05:07.037 "ack_timeout": 0, 00:05:07.037 "data_wr_pool_size": 0 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 }, 00:05:07.037 { 00:05:07.037 "subsystem": "iscsi", 00:05:07.037 "config": [ 00:05:07.037 { 00:05:07.037 "method": "iscsi_set_options", 00:05:07.037 "params": { 00:05:07.037 "node_base": "iqn.2016-06.io.spdk", 00:05:07.037 "max_sessions": 128, 00:05:07.037 "max_connections_per_session": 2, 00:05:07.037 "max_queue_depth": 64, 00:05:07.037 "default_time2wait": 2, 00:05:07.037 "default_time2retain": 20, 00:05:07.037 "first_burst_length": 8192, 00:05:07.037 "immediate_data": true, 00:05:07.037 "allow_duplicated_isid": false, 00:05:07.037 "error_recovery_level": 0, 00:05:07.037 "nop_timeout": 60, 00:05:07.037 "nop_in_interval": 30, 00:05:07.037 "disable_chap": false, 00:05:07.037 "require_chap": false, 00:05:07.037 "mutual_chap": false, 00:05:07.037 "chap_group": 0, 00:05:07.037 "max_large_datain_per_connection": 64, 00:05:07.037 "max_r2t_per_connection": 4, 00:05:07.037 "pdu_pool_size": 36864, 00:05:07.037 "immediate_data_pool_size": 16384, 00:05:07.037 "data_out_pool_size": 2048 00:05:07.037 } 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 } 00:05:07.037 ] 00:05:07.037 } 00:05:07.037 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.037 08:16:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57346 00:05:07.037 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57346 ']' 00:05:07.037 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57346 00:05:07.038 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:07.038 08:16:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.038 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57346 00:05:07.297 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.297 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.297 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57346' 00:05:07.297 killing process with pid 57346 00:05:07.297 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57346 00:05:07.297 08:16:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57346 00:05:09.833 08:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57402 00:05:09.833 08:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.833 08:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57402 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57402 ']' 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57402 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57402 00:05:15.122 killing process with pid 57402 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57402' 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57402 00:05:15.122 08:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57402 00:05:17.660 08:16:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.660 08:16:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.660 00:05:17.660 real 0m12.106s 00:05:17.660 user 0m11.561s 00:05:17.660 sys 0m0.887s 00:05:17.660 ************************************ 00:05:17.660 END TEST skip_rpc_with_json 00:05:17.660 ************************************ 00:05:17.660 08:16:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.661 08:16:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.661 08:16:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.661 08:16:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.661 08:16:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.661 ************************************ 00:05:17.661 START TEST skip_rpc_with_delay 00:05:17.661 ************************************ 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:17.661 
08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.661 08:16:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.921 [2024-12-13 08:16:30.085752] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.921 ************************************ 00:05:17.921 END TEST skip_rpc_with_delay 00:05:17.921 ************************************ 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.921 00:05:17.921 real 0m0.183s 00:05:17.921 user 0m0.100s 00:05:17.921 sys 0m0.081s 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.921 08:16:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.921 08:16:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.921 08:16:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.921 08:16:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.921 08:16:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.921 08:16:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.921 08:16:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.921 ************************************ 00:05:17.921 START TEST exit_on_failed_rpc_init 00:05:17.921 ************************************ 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57541 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57541 00:05:17.921 08:16:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57541 ']' 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.921 08:16:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.180 [2024-12-13 08:16:30.328742] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:18.180 [2024-12-13 08:16:30.329012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57541 ] 00:05:18.180 [2024-12-13 08:16:30.497182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.439 [2024-12-13 08:16:30.636681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.379 08:16:31 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:19.379 08:16:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.638 [2024-12-13 08:16:31.804903] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:19.638 [2024-12-13 08:16:31.805041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57559 ] 00:05:19.638 [2024-12-13 08:16:31.979223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.898 [2024-12-13 08:16:32.121055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.898 [2024-12-13 08:16:32.121193] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:19.898 [2024-12-13 08:16:32.121212] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:19.898 [2024-12-13 08:16:32.121227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57541 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57541 ']' 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57541 00:05:20.158 08:16:32 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57541 00:05:20.158 killing process with pid 57541 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57541' 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57541 00:05:20.158 08:16:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57541 00:05:23.448 00:05:23.448 real 0m5.058s 00:05:23.448 user 0m5.507s 00:05:23.448 sys 0m0.606s 00:05:23.448 ************************************ 00:05:23.448 END TEST exit_on_failed_rpc_init 00:05:23.448 ************************************ 00:05:23.448 08:16:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.448 08:16:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.448 08:16:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.448 00:05:23.448 real 0m25.444s 00:05:23.448 user 0m24.554s 00:05:23.448 sys 0m2.218s 00:05:23.448 08:16:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.448 ************************************ 00:05:23.448 END TEST skip_rpc 00:05:23.448 ************************************ 00:05:23.448 08:16:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.448 08:16:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.448 08:16:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.448 08:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.448 08:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.448 ************************************ 00:05:23.448 START TEST rpc_client 00:05:23.448 ************************************ 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.448 * Looking for test storage... 00:05:23.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.448 08:16:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.448 --rc genhtml_branch_coverage=1 00:05:23.448 --rc genhtml_function_coverage=1 00:05:23.448 --rc genhtml_legend=1 00:05:23.448 --rc geninfo_all_blocks=1 00:05:23.448 --rc geninfo_unexecuted_blocks=1 00:05:23.448 00:05:23.448 ' 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.448 --rc genhtml_branch_coverage=1 00:05:23.448 --rc genhtml_function_coverage=1 00:05:23.448 --rc 
genhtml_legend=1 00:05:23.448 --rc geninfo_all_blocks=1 00:05:23.448 --rc geninfo_unexecuted_blocks=1 00:05:23.448 00:05:23.448 ' 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.448 --rc genhtml_branch_coverage=1 00:05:23.448 --rc genhtml_function_coverage=1 00:05:23.448 --rc genhtml_legend=1 00:05:23.448 --rc geninfo_all_blocks=1 00:05:23.448 --rc geninfo_unexecuted_blocks=1 00:05:23.448 00:05:23.448 ' 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.448 --rc genhtml_branch_coverage=1 00:05:23.448 --rc genhtml_function_coverage=1 00:05:23.448 --rc genhtml_legend=1 00:05:23.448 --rc geninfo_all_blocks=1 00:05:23.448 --rc geninfo_unexecuted_blocks=1 00:05:23.448 00:05:23.448 ' 00:05:23.448 08:16:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:23.448 OK 00:05:23.448 08:16:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.448 ************************************ 00:05:23.448 END TEST rpc_client 00:05:23.448 ************************************ 00:05:23.448 00:05:23.448 real 0m0.237s 00:05:23.448 user 0m0.130s 00:05:23.448 sys 0m0.117s 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.448 08:16:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:23.448 08:16:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:23.448 08:16:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.448 08:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.448 08:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.448 ************************************ 00:05:23.448 START TEST json_config 
00:05:23.448 ************************************ 00:05:23.448 08:16:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:23.448 08:16:35 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.448 08:16:35 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.448 08:16:35 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.448 08:16:35 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.448 08:16:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.448 08:16:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.448 08:16:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.448 08:16:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.448 08:16:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.448 08:16:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.448 08:16:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.448 08:16:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.448 08:16:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.448 08:16:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.448 08:16:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.448 08:16:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:23.448 08:16:35 json_config -- scripts/common.sh@345 -- # : 1 00:05:23.448 08:16:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.448 08:16:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.448 08:16:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:23.713 08:16:35 json_config -- scripts/common.sh@353 -- # local d=1 00:05:23.713 08:16:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.713 08:16:35 json_config -- scripts/common.sh@355 -- # echo 1 00:05:23.713 08:16:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.713 08:16:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:23.713 08:16:35 json_config -- scripts/common.sh@353 -- # local d=2 00:05:23.713 08:16:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.713 08:16:35 json_config -- scripts/common.sh@355 -- # echo 2 00:05:23.713 08:16:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.713 08:16:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.713 08:16:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.713 08:16:35 json_config -- scripts/common.sh@368 -- # return 0 00:05:23.713 08:16:35 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.713 08:16:35 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.713 --rc genhtml_branch_coverage=1 00:05:23.713 --rc genhtml_function_coverage=1 00:05:23.713 --rc genhtml_legend=1 00:05:23.713 --rc geninfo_all_blocks=1 00:05:23.713 --rc geninfo_unexecuted_blocks=1 00:05:23.713 00:05:23.713 ' 00:05:23.713 08:16:35 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.713 --rc genhtml_branch_coverage=1 00:05:23.713 --rc genhtml_function_coverage=1 00:05:23.713 --rc genhtml_legend=1 00:05:23.713 --rc geninfo_all_blocks=1 00:05:23.713 --rc geninfo_unexecuted_blocks=1 00:05:23.713 00:05:23.713 ' 00:05:23.713 08:16:35 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.713 --rc genhtml_branch_coverage=1 00:05:23.713 --rc genhtml_function_coverage=1 00:05:23.713 --rc genhtml_legend=1 00:05:23.713 --rc geninfo_all_blocks=1 00:05:23.713 --rc geninfo_unexecuted_blocks=1 00:05:23.713 00:05:23.713 ' 00:05:23.713 08:16:35 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.713 --rc genhtml_branch_coverage=1 00:05:23.713 --rc genhtml_function_coverage=1 00:05:23.713 --rc genhtml_legend=1 00:05:23.713 --rc geninfo_all_blocks=1 00:05:23.713 --rc geninfo_unexecuted_blocks=1 00:05:23.713 00:05:23.713 ' 00:05:23.713 08:16:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.713 08:16:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.713 08:16:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.713 08:16:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.713 08:16:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.714 08:16:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.714 08:16:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.714 08:16:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.714 08:16:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.714 08:16:35 json_config -- paths/export.sh@5 -- # export PATH 00:05:23.714 08:16:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@51 -- # : 0 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.714 08:16:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:23.714 WARNING: No tests are enabled so not running JSON configuration tests 00:05:23.714 08:16:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:23.714 00:05:23.714 real 0m0.224s 00:05:23.714 user 0m0.140s 00:05:23.714 sys 0m0.087s 00:05:23.714 08:16:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.714 08:16:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.714 ************************************ 00:05:23.714 END TEST json_config 00:05:23.714 ************************************ 00:05:23.714 08:16:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.714 08:16:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.714 08:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.714 08:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.714 ************************************ 00:05:23.714 START TEST json_config_extra_key 00:05:23.714 ************************************ 00:05:23.714 08:16:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.714 08:16:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.714 08:16:35 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:23.714 08:16:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.980 08:16:36 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.980 08:16:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:23.980 08:16:36 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.980 08:16:36 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.980 --rc genhtml_branch_coverage=1 00:05:23.980 --rc genhtml_function_coverage=1 00:05:23.980 --rc genhtml_legend=1 00:05:23.980 --rc geninfo_all_blocks=1 00:05:23.980 --rc geninfo_unexecuted_blocks=1 00:05:23.980 00:05:23.980 ' 00:05:23.980 08:16:36 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.980 --rc genhtml_branch_coverage=1 00:05:23.980 --rc genhtml_function_coverage=1 00:05:23.980 --rc 
genhtml_legend=1 00:05:23.981 --rc geninfo_all_blocks=1 00:05:23.981 --rc geninfo_unexecuted_blocks=1 00:05:23.981 00:05:23.981 ' 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.981 --rc genhtml_branch_coverage=1 00:05:23.981 --rc genhtml_function_coverage=1 00:05:23.981 --rc genhtml_legend=1 00:05:23.981 --rc geninfo_all_blocks=1 00:05:23.981 --rc geninfo_unexecuted_blocks=1 00:05:23.981 00:05:23.981 ' 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.981 --rc genhtml_branch_coverage=1 00:05:23.981 --rc genhtml_function_coverage=1 00:05:23.981 --rc genhtml_legend=1 00:05:23.981 --rc geninfo_all_blocks=1 00:05:23.981 --rc geninfo_unexecuted_blocks=1 00:05:23.981 00:05:23.981 ' 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=194d8c4c-be9a-4294-b99e-c6d77342eeb3 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.981 08:16:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.981 08:16:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.981 08:16:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.981 08:16:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.981 08:16:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.981 08:16:36 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.981 08:16:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.981 08:16:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.981 08:16:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.981 08:16:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.981 INFO: launching applications... 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:23.981 08:16:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57775 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.981 Waiting for target to run... 00:05:23.981 08:16:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57775 /var/tmp/spdk_tgt.sock 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57775 ']' 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:23.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.981 08:16:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.981 [2024-12-13 08:16:36.259727] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:23.981 [2024-12-13 08:16:36.259981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:05:24.551 [2024-12-13 08:16:36.651778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.551 [2024-12-13 08:16:36.806522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.491 08:16:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.491 08:16:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.491 00:05:25.491 08:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:25.491 INFO: shutting down applications... 
00:05:25.491 08:16:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57775 ]] 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57775 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:25.491 08:16:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.058 08:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.058 08:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.058 08:16:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:26.058 08:16:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.316 08:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.316 08:16:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.316 08:16:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:26.316 08:16:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.885 08:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.885 08:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.885 08:16:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:26.885 08:16:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.454 08:16:39 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:27.454 08:16:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.454 08:16:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:27.454 08:16:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.023 08:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.023 08:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.023 08:16:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:28.023 08:16:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.593 08:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.593 08:16:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.593 08:16:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:28.593 08:16:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.852 SPDK target shutdown done 00:05:28.852 Success 00:05:28.852 08:16:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.852 08:16:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.852 00:05:28.852 real 0m5.274s 00:05:28.852 user 0m4.869s 00:05:28.852 sys 0m0.562s 00:05:28.852 ************************************ 00:05:28.852 END TEST json_config_extra_key 00:05:28.852 
************************************ 00:05:28.852 08:16:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.852 08:16:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.112 08:16:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.112 08:16:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.112 08:16:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.112 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.112 ************************************ 00:05:29.112 START TEST alias_rpc 00:05:29.112 ************************************ 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.112 * Looking for test storage... 00:05:29.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.112 08:16:41 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.112 08:16:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.112 --rc genhtml_branch_coverage=1 00:05:29.112 --rc genhtml_function_coverage=1 00:05:29.112 --rc genhtml_legend=1 00:05:29.112 --rc geninfo_all_blocks=1 00:05:29.112 --rc geninfo_unexecuted_blocks=1 00:05:29.112 00:05:29.112 ' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.112 --rc genhtml_branch_coverage=1 00:05:29.112 --rc genhtml_function_coverage=1 00:05:29.112 --rc genhtml_legend=1 00:05:29.112 --rc geninfo_all_blocks=1 00:05:29.112 --rc geninfo_unexecuted_blocks=1 00:05:29.112 00:05:29.112 ' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.112 --rc genhtml_branch_coverage=1 00:05:29.112 --rc genhtml_function_coverage=1 00:05:29.112 --rc genhtml_legend=1 00:05:29.112 --rc geninfo_all_blocks=1 00:05:29.112 --rc geninfo_unexecuted_blocks=1 00:05:29.112 00:05:29.112 ' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.112 --rc genhtml_branch_coverage=1 00:05:29.112 --rc genhtml_function_coverage=1 00:05:29.112 --rc genhtml_legend=1 00:05:29.112 --rc geninfo_all_blocks=1 00:05:29.112 --rc 
geninfo_unexecuted_blocks=1 00:05:29.112 00:05:29.112 ' 00:05:29.112 08:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.112 08:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57893 00:05:29.112 08:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.112 08:16:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57893 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57893 ']' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.112 08:16:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.372 [2024-12-13 08:16:41.523675] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:29.372 [2024-12-13 08:16:41.524012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57893 ] 00:05:29.372 [2024-12-13 08:16:41.708347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.631 [2024-12-13 08:16:41.841906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.570 08:16:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.570 08:16:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.570 08:16:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:30.829 08:16:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57893 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57893 ']' 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57893 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57893 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.829 killing process with pid 57893 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57893' 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 57893 00:05:30.829 08:16:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 57893 00:05:33.361 ************************************ 00:05:33.361 END TEST alias_rpc 00:05:33.361 ************************************ 00:05:33.361 00:05:33.361 real 
0m4.469s 00:05:33.361 user 0m4.541s 00:05:33.361 sys 0m0.555s 00:05:33.361 08:16:45 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.361 08:16:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.620 08:16:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:33.620 08:16:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:33.620 08:16:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.620 08:16:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.620 08:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.620 ************************************ 00:05:33.620 START TEST spdkcli_tcp 00:05:33.620 ************************************ 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:33.620 * Looking for test storage... 00:05:33.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.620 
08:16:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.620 08:16:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.620 --rc genhtml_branch_coverage=1 00:05:33.620 --rc genhtml_function_coverage=1 00:05:33.620 --rc genhtml_legend=1 
00:05:33.620 --rc geninfo_all_blocks=1 00:05:33.620 --rc geninfo_unexecuted_blocks=1 00:05:33.620 00:05:33.620 ' 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.620 --rc genhtml_branch_coverage=1 00:05:33.620 --rc genhtml_function_coverage=1 00:05:33.620 --rc genhtml_legend=1 00:05:33.620 --rc geninfo_all_blocks=1 00:05:33.620 --rc geninfo_unexecuted_blocks=1 00:05:33.620 00:05:33.620 ' 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.620 --rc genhtml_branch_coverage=1 00:05:33.620 --rc genhtml_function_coverage=1 00:05:33.620 --rc genhtml_legend=1 00:05:33.620 --rc geninfo_all_blocks=1 00:05:33.620 --rc geninfo_unexecuted_blocks=1 00:05:33.620 00:05:33.620 ' 00:05:33.620 08:16:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.620 --rc genhtml_branch_coverage=1 00:05:33.620 --rc genhtml_function_coverage=1 00:05:33.620 --rc genhtml_legend=1 00:05:33.620 --rc geninfo_all_blocks=1 00:05:33.620 --rc geninfo_unexecuted_blocks=1 00:05:33.620 00:05:33.620 ' 00:05:33.620 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:33.620 08:16:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.621 08:16:45 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58005 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.621 08:16:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58005 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58005 ']' 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.621 08:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.879 [2024-12-13 08:16:46.057757] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:33.879 [2024-12-13 08:16:46.057881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:05:33.879 [2024-12-13 08:16:46.237673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.138 [2024-12-13 08:16:46.368877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.138 [2024-12-13 08:16:46.368913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.074 08:16:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.074 08:16:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:35.074 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58022 00:05:35.074 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.074 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.334 [ 00:05:35.334 "bdev_malloc_delete", 00:05:35.334 "bdev_malloc_create", 00:05:35.334 "bdev_null_resize", 00:05:35.334 "bdev_null_delete", 00:05:35.334 "bdev_null_create", 00:05:35.334 "bdev_nvme_cuse_unregister", 00:05:35.334 "bdev_nvme_cuse_register", 00:05:35.334 "bdev_opal_new_user", 00:05:35.334 "bdev_opal_set_lock_state", 00:05:35.334 "bdev_opal_delete", 00:05:35.334 "bdev_opal_get_info", 00:05:35.334 "bdev_opal_create", 00:05:35.334 "bdev_nvme_opal_revert", 00:05:35.334 "bdev_nvme_opal_init", 00:05:35.334 "bdev_nvme_send_cmd", 00:05:35.334 "bdev_nvme_set_keys", 00:05:35.334 "bdev_nvme_get_path_iostat", 00:05:35.334 "bdev_nvme_get_mdns_discovery_info", 00:05:35.334 "bdev_nvme_stop_mdns_discovery", 00:05:35.334 "bdev_nvme_start_mdns_discovery", 00:05:35.334 "bdev_nvme_set_multipath_policy", 00:05:35.334 
"bdev_nvme_set_preferred_path", 00:05:35.334 "bdev_nvme_get_io_paths", 00:05:35.334 "bdev_nvme_remove_error_injection", 00:05:35.334 "bdev_nvme_add_error_injection", 00:05:35.334 "bdev_nvme_get_discovery_info", 00:05:35.334 "bdev_nvme_stop_discovery", 00:05:35.334 "bdev_nvme_start_discovery", 00:05:35.334 "bdev_nvme_get_controller_health_info", 00:05:35.334 "bdev_nvme_disable_controller", 00:05:35.334 "bdev_nvme_enable_controller", 00:05:35.334 "bdev_nvme_reset_controller", 00:05:35.334 "bdev_nvme_get_transport_statistics", 00:05:35.334 "bdev_nvme_apply_firmware", 00:05:35.334 "bdev_nvme_detach_controller", 00:05:35.334 "bdev_nvme_get_controllers", 00:05:35.334 "bdev_nvme_attach_controller", 00:05:35.334 "bdev_nvme_set_hotplug", 00:05:35.334 "bdev_nvme_set_options", 00:05:35.334 "bdev_passthru_delete", 00:05:35.334 "bdev_passthru_create", 00:05:35.334 "bdev_lvol_set_parent_bdev", 00:05:35.334 "bdev_lvol_set_parent", 00:05:35.334 "bdev_lvol_check_shallow_copy", 00:05:35.334 "bdev_lvol_start_shallow_copy", 00:05:35.334 "bdev_lvol_grow_lvstore", 00:05:35.334 "bdev_lvol_get_lvols", 00:05:35.334 "bdev_lvol_get_lvstores", 00:05:35.334 "bdev_lvol_delete", 00:05:35.334 "bdev_lvol_set_read_only", 00:05:35.334 "bdev_lvol_resize", 00:05:35.334 "bdev_lvol_decouple_parent", 00:05:35.334 "bdev_lvol_inflate", 00:05:35.335 "bdev_lvol_rename", 00:05:35.335 "bdev_lvol_clone_bdev", 00:05:35.335 "bdev_lvol_clone", 00:05:35.335 "bdev_lvol_snapshot", 00:05:35.335 "bdev_lvol_create", 00:05:35.335 "bdev_lvol_delete_lvstore", 00:05:35.335 "bdev_lvol_rename_lvstore", 00:05:35.335 "bdev_lvol_create_lvstore", 00:05:35.335 "bdev_raid_set_options", 00:05:35.335 "bdev_raid_remove_base_bdev", 00:05:35.335 "bdev_raid_add_base_bdev", 00:05:35.335 "bdev_raid_delete", 00:05:35.335 "bdev_raid_create", 00:05:35.335 "bdev_raid_get_bdevs", 00:05:35.335 "bdev_error_inject_error", 00:05:35.335 "bdev_error_delete", 00:05:35.335 "bdev_error_create", 00:05:35.335 "bdev_split_delete", 00:05:35.335 
"bdev_split_create", 00:05:35.335 "bdev_delay_delete", 00:05:35.335 "bdev_delay_create", 00:05:35.335 "bdev_delay_update_latency", 00:05:35.335 "bdev_zone_block_delete", 00:05:35.335 "bdev_zone_block_create", 00:05:35.335 "blobfs_create", 00:05:35.335 "blobfs_detect", 00:05:35.335 "blobfs_set_cache_size", 00:05:35.335 "bdev_aio_delete", 00:05:35.335 "bdev_aio_rescan", 00:05:35.335 "bdev_aio_create", 00:05:35.335 "bdev_ftl_set_property", 00:05:35.335 "bdev_ftl_get_properties", 00:05:35.335 "bdev_ftl_get_stats", 00:05:35.335 "bdev_ftl_unmap", 00:05:35.335 "bdev_ftl_unload", 00:05:35.335 "bdev_ftl_delete", 00:05:35.335 "bdev_ftl_load", 00:05:35.335 "bdev_ftl_create", 00:05:35.335 "bdev_virtio_attach_controller", 00:05:35.335 "bdev_virtio_scsi_get_devices", 00:05:35.335 "bdev_virtio_detach_controller", 00:05:35.335 "bdev_virtio_blk_set_hotplug", 00:05:35.335 "bdev_iscsi_delete", 00:05:35.335 "bdev_iscsi_create", 00:05:35.335 "bdev_iscsi_set_options", 00:05:35.335 "accel_error_inject_error", 00:05:35.335 "ioat_scan_accel_module", 00:05:35.335 "dsa_scan_accel_module", 00:05:35.335 "iaa_scan_accel_module", 00:05:35.335 "keyring_file_remove_key", 00:05:35.335 "keyring_file_add_key", 00:05:35.335 "keyring_linux_set_options", 00:05:35.335 "fsdev_aio_delete", 00:05:35.335 "fsdev_aio_create", 00:05:35.335 "iscsi_get_histogram", 00:05:35.335 "iscsi_enable_histogram", 00:05:35.335 "iscsi_set_options", 00:05:35.335 "iscsi_get_auth_groups", 00:05:35.335 "iscsi_auth_group_remove_secret", 00:05:35.335 "iscsi_auth_group_add_secret", 00:05:35.335 "iscsi_delete_auth_group", 00:05:35.335 "iscsi_create_auth_group", 00:05:35.335 "iscsi_set_discovery_auth", 00:05:35.335 "iscsi_get_options", 00:05:35.335 "iscsi_target_node_request_logout", 00:05:35.335 "iscsi_target_node_set_redirect", 00:05:35.335 "iscsi_target_node_set_auth", 00:05:35.335 "iscsi_target_node_add_lun", 00:05:35.335 "iscsi_get_stats", 00:05:35.335 "iscsi_get_connections", 00:05:35.335 "iscsi_portal_group_set_auth", 
00:05:35.335 "iscsi_start_portal_group", 00:05:35.335 "iscsi_delete_portal_group", 00:05:35.335 "iscsi_create_portal_group", 00:05:35.335 "iscsi_get_portal_groups", 00:05:35.335 "iscsi_delete_target_node", 00:05:35.335 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.335 "iscsi_target_node_add_pg_ig_maps", 00:05:35.335 "iscsi_create_target_node", 00:05:35.335 "iscsi_get_target_nodes", 00:05:35.335 "iscsi_delete_initiator_group", 00:05:35.335 "iscsi_initiator_group_remove_initiators", 00:05:35.335 "iscsi_initiator_group_add_initiators", 00:05:35.335 "iscsi_create_initiator_group", 00:05:35.335 "iscsi_get_initiator_groups", 00:05:35.335 "nvmf_set_crdt", 00:05:35.335 "nvmf_set_config", 00:05:35.335 "nvmf_set_max_subsystems", 00:05:35.335 "nvmf_stop_mdns_prr", 00:05:35.335 "nvmf_publish_mdns_prr", 00:05:35.335 "nvmf_subsystem_get_listeners", 00:05:35.335 "nvmf_subsystem_get_qpairs", 00:05:35.335 "nvmf_subsystem_get_controllers", 00:05:35.335 "nvmf_get_stats", 00:05:35.335 "nvmf_get_transports", 00:05:35.335 "nvmf_create_transport", 00:05:35.335 "nvmf_get_targets", 00:05:35.335 "nvmf_delete_target", 00:05:35.335 "nvmf_create_target", 00:05:35.335 "nvmf_subsystem_allow_any_host", 00:05:35.335 "nvmf_subsystem_set_keys", 00:05:35.335 "nvmf_subsystem_remove_host", 00:05:35.335 "nvmf_subsystem_add_host", 00:05:35.335 "nvmf_ns_remove_host", 00:05:35.335 "nvmf_ns_add_host", 00:05:35.335 "nvmf_subsystem_remove_ns", 00:05:35.335 "nvmf_subsystem_set_ns_ana_group", 00:05:35.335 "nvmf_subsystem_add_ns", 00:05:35.335 "nvmf_subsystem_listener_set_ana_state", 00:05:35.335 "nvmf_discovery_get_referrals", 00:05:35.335 "nvmf_discovery_remove_referral", 00:05:35.335 "nvmf_discovery_add_referral", 00:05:35.335 "nvmf_subsystem_remove_listener", 00:05:35.335 "nvmf_subsystem_add_listener", 00:05:35.335 "nvmf_delete_subsystem", 00:05:35.335 "nvmf_create_subsystem", 00:05:35.335 "nvmf_get_subsystems", 00:05:35.335 "env_dpdk_get_mem_stats", 00:05:35.335 "nbd_get_disks", 00:05:35.335 
"nbd_stop_disk", 00:05:35.335 "nbd_start_disk", 00:05:35.335 "ublk_recover_disk", 00:05:35.335 "ublk_get_disks", 00:05:35.335 "ublk_stop_disk", 00:05:35.335 "ublk_start_disk", 00:05:35.335 "ublk_destroy_target", 00:05:35.335 "ublk_create_target", 00:05:35.335 "virtio_blk_create_transport", 00:05:35.335 "virtio_blk_get_transports", 00:05:35.335 "vhost_controller_set_coalescing", 00:05:35.335 "vhost_get_controllers", 00:05:35.335 "vhost_delete_controller", 00:05:35.335 "vhost_create_blk_controller", 00:05:35.335 "vhost_scsi_controller_remove_target", 00:05:35.335 "vhost_scsi_controller_add_target", 00:05:35.335 "vhost_start_scsi_controller", 00:05:35.335 "vhost_create_scsi_controller", 00:05:35.335 "thread_set_cpumask", 00:05:35.335 "scheduler_set_options", 00:05:35.335 "framework_get_governor", 00:05:35.335 "framework_get_scheduler", 00:05:35.335 "framework_set_scheduler", 00:05:35.335 "framework_get_reactors", 00:05:35.335 "thread_get_io_channels", 00:05:35.335 "thread_get_pollers", 00:05:35.335 "thread_get_stats", 00:05:35.335 "framework_monitor_context_switch", 00:05:35.335 "spdk_kill_instance", 00:05:35.335 "log_enable_timestamps", 00:05:35.335 "log_get_flags", 00:05:35.335 "log_clear_flag", 00:05:35.335 "log_set_flag", 00:05:35.335 "log_get_level", 00:05:35.335 "log_set_level", 00:05:35.335 "log_get_print_level", 00:05:35.335 "log_set_print_level", 00:05:35.335 "framework_enable_cpumask_locks", 00:05:35.335 "framework_disable_cpumask_locks", 00:05:35.335 "framework_wait_init", 00:05:35.335 "framework_start_init", 00:05:35.335 "scsi_get_devices", 00:05:35.335 "bdev_get_histogram", 00:05:35.335 "bdev_enable_histogram", 00:05:35.335 "bdev_set_qos_limit", 00:05:35.335 "bdev_set_qd_sampling_period", 00:05:35.335 "bdev_get_bdevs", 00:05:35.335 "bdev_reset_iostat", 00:05:35.335 "bdev_get_iostat", 00:05:35.335 "bdev_examine", 00:05:35.335 "bdev_wait_for_examine", 00:05:35.335 "bdev_set_options", 00:05:35.335 "accel_get_stats", 00:05:35.335 "accel_set_options", 
00:05:35.335 "accel_set_driver", 00:05:35.335 "accel_crypto_key_destroy", 00:05:35.335 "accel_crypto_keys_get", 00:05:35.335 "accel_crypto_key_create", 00:05:35.335 "accel_assign_opc", 00:05:35.335 "accel_get_module_info", 00:05:35.335 "accel_get_opc_assignments", 00:05:35.335 "vmd_rescan", 00:05:35.335 "vmd_remove_device", 00:05:35.335 "vmd_enable", 00:05:35.335 "sock_get_default_impl", 00:05:35.335 "sock_set_default_impl", 00:05:35.335 "sock_impl_set_options", 00:05:35.335 "sock_impl_get_options", 00:05:35.335 "iobuf_get_stats", 00:05:35.335 "iobuf_set_options", 00:05:35.335 "keyring_get_keys", 00:05:35.335 "framework_get_pci_devices", 00:05:35.335 "framework_get_config", 00:05:35.335 "framework_get_subsystems", 00:05:35.335 "fsdev_set_opts", 00:05:35.335 "fsdev_get_opts", 00:05:35.335 "trace_get_info", 00:05:35.335 "trace_get_tpoint_group_mask", 00:05:35.335 "trace_disable_tpoint_group", 00:05:35.335 "trace_enable_tpoint_group", 00:05:35.335 "trace_clear_tpoint_mask", 00:05:35.335 "trace_set_tpoint_mask", 00:05:35.335 "notify_get_notifications", 00:05:35.335 "notify_get_types", 00:05:35.335 "spdk_get_version", 00:05:35.335 "rpc_get_methods" 00:05:35.335 ] 00:05:35.335 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.335 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.335 08:16:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58005 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58005 ']' 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58005 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.335 08:16:47 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58005 00:05:35.335 killing process with pid 58005 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58005' 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58005 00:05:35.335 08:16:47 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58005 00:05:38.621 ************************************ 00:05:38.621 END TEST spdkcli_tcp 00:05:38.621 ************************************ 00:05:38.621 00:05:38.621 real 0m4.549s 00:05:38.621 user 0m8.188s 00:05:38.621 sys 0m0.634s 00:05:38.621 08:16:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.621 08:16:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 08:16:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.621 08:16:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.621 08:16:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.621 08:16:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 ************************************ 00:05:38.621 START TEST dpdk_mem_utility 00:05:38.621 ************************************ 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.621 * Looking for test storage... 
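The spdkcli_tcp run above starts spdk_tgt and then blocks on `waitforlisten` until the target's RPC socket at /var/tmp/spdk.sock is up (the "Waiting for process to start up and listen on UNIX domain socket..." lines). A minimal sketch of that polling pattern is below; `wait_for_socket` is an illustrative name, not the real helper (the actual `waitforlisten` in autotest_common.sh retries an RPC call rather than just checking the socket file), and the retry count mirrors the `max_retries=100` seen in the trace.

```shell
# Hedged sketch of the waitforlisten pattern from autotest_common.sh:
# poll until a UNIX-domain socket appears, giving up after max_retries attempts.
wait_for_socket() {
	local sock=$1 max_retries=${2:-100} i=0
	echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
	until [ -S "$sock" ]; do              # -S: path exists and is a socket
		if (( ++i >= max_retries )); then
			return 1                      # target never came up
		fi
		sleep 0.1
	done
	return 0
}
```

The real helper then issues `rpc.py rpc_get_methods` against the socket as a liveness probe, which is stronger than an existence check: the socket file can exist before the reactor is actually serving requests.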
00:05:38.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.621 08:16:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.621 --rc genhtml_branch_coverage=1 00:05:38.621 --rc genhtml_function_coverage=1 00:05:38.621 --rc genhtml_legend=1 00:05:38.621 --rc geninfo_all_blocks=1 00:05:38.621 --rc geninfo_unexecuted_blocks=1 00:05:38.621 00:05:38.621 ' 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.621 --rc genhtml_branch_coverage=1 00:05:38.621 --rc genhtml_function_coverage=1 00:05:38.621 --rc genhtml_legend=1 00:05:38.621 --rc geninfo_all_blocks=1 00:05:38.621 --rc 
geninfo_unexecuted_blocks=1 00:05:38.621 00:05:38.621 ' 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.621 --rc genhtml_branch_coverage=1 00:05:38.621 --rc genhtml_function_coverage=1 00:05:38.621 --rc genhtml_legend=1 00:05:38.621 --rc geninfo_all_blocks=1 00:05:38.621 --rc geninfo_unexecuted_blocks=1 00:05:38.621 00:05:38.621 ' 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.621 --rc genhtml_branch_coverage=1 00:05:38.621 --rc genhtml_function_coverage=1 00:05:38.621 --rc genhtml_legend=1 00:05:38.621 --rc geninfo_all_blocks=1 00:05:38.621 --rc geninfo_unexecuted_blocks=1 00:05:38.621 00:05:38.621 ' 00:05:38.621 08:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.621 08:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58133 00:05:38.621 08:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.621 08:16:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58133 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58133 ']' 00:05:38.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
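Each test section above runs the same lcov version gate: `lt 1.15 2` from scripts/common.sh splits both versions on `.-:` into arrays and compares them component by component (the `IFS=.-:`, `read -ra ver1/ver2`, and `decimal` lines in the trace). A standalone sketch of that comparison follows; `ver_lt` is an illustrative name, it handles numeric components only, and the real script additionally normalizes components through its `decimal` helper.

```shell
# Sketch of the lt/cmp_versions logic traced above: return 0 iff $1 < $2,
# comparing dot/dash/colon-separated numeric components left to right.
ver_lt() {
	local IFS='.-:' v
	local -a ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$2"
	# Iterate up to the longer version; missing components compare as 0.
	for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
		local a=${ver1[v]:-0} b=${ver2[v]:-0}
		if (( a < b )); then
			return 0
		elif (( a > b )); then
			return 1
		fi
	done
	return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why the trace compares `1.15` against `2` and takes the old-lcov branch: the first components already differ (1 < 2), so the per-test `LCOV_OPTS` are assembled with the pre-2.0 flag set.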
00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.621 08:16:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 [2024-12-13 08:16:50.692181] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:38.621 [2024-12-13 08:16:50.692406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58133 ] 00:05:38.621 [2024-12-13 08:16:50.870681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.879 [2024-12-13 08:16:50.996453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.818 08:16:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.818 08:16:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:39.818 08:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.818 08:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.818 08:16:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.818 08:16:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.818 { 00:05:39.818 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.818 } 00:05:39.818 08:16:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.818 08:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:39.818 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:39.818 1 heaps totaling size 824.000000 MiB 00:05:39.818 size: 824.000000 MiB heap id: 0 00:05:39.818 end heaps---------- 00:05:39.818 9 mempools totaling size 603.782043 MiB 00:05:39.818 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.818 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.818 size: 100.555481 MiB name: bdev_io_58133 00:05:39.818 size: 50.003479 MiB name: msgpool_58133 00:05:39.818 size: 36.509338 MiB name: fsdev_io_58133 00:05:39.818 size: 21.763794 MiB name: PDU_Pool 00:05:39.818 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.818 size: 4.133484 MiB name: evtpool_58133 00:05:39.818 size: 0.026123 MiB name: Session_Pool 00:05:39.818 end mempools------- 00:05:39.818 6 memzones totaling size 4.142822 MiB 00:05:39.818 size: 1.000366 MiB name: RG_ring_0_58133 00:05:39.818 size: 1.000366 MiB name: RG_ring_1_58133 00:05:39.818 size: 1.000366 MiB name: RG_ring_4_58133 00:05:39.818 size: 1.000366 MiB name: RG_ring_5_58133 00:05:39.818 size: 0.125366 MiB name: RG_ring_2_58133 00:05:39.818 size: 0.015991 MiB name: RG_ring_3_58133 00:05:39.818 end memzones------- 00:05:39.818 08:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.818 heap id: 0 total size: 824.000000 MiB number of busy elements: 311 number of free elements: 18 00:05:39.818 list of free elements. 
size: 16.782349 MiB 00:05:39.818 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:39.818 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:39.818 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:39.818 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:39.818 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:39.818 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:39.818 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:39.818 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:39.818 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:39.818 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:39.818 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:39.818 element at address: 0x20001b400000 with size: 0.563660 MiB 00:05:39.818 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:39.818 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:39.818 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:39.818 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:39.818 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:39.818 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:39.818 list of standard malloc elements. 
size: 199.286743 MiB 00:05:39.818 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:39.818 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:39.818 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:39.818 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:39.818 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:39.818 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:39.818 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:39.818 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:39.818 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:39.818 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:39.818 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:39.818 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:39.818 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:39.818 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:39.819 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:39.819 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:39.819 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:39.819 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4915c0 with size: 0.000244 
MiB 00:05:39.819 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4931c0 
with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:39.819 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:39.820 element at 
address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:39.820 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:39.820 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c080 with size: 0.000244 MiB 
00:05:39.820 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886dc80 with 
size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:39.820 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:39.820 element at address: 
0x20002886f880 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886f980 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886fa80 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886fb80 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886fc80 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886fd80 with size: 0.000244 MiB
00:05:39.820 element at address: 0x20002886fe80 with size: 0.000244 MiB
00:05:39.820 list of memzone associated elements. size: 607.930908 MiB
00:05:39.820 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:05:39.820 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:39.820 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:05:39.820 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:39.820 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:05:39.820 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58133_0
00:05:39.820 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:39.820 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58133_0
00:05:39.820 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:39.820 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58133_0
00:05:39.820 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:05:39.820 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:39.820 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:05:39.820 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:39.820 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:39.820 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58133_0
00:05:39.820 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:39.820 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58133
00:05:39.820 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:39.820 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58133
00:05:39.820 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:05:39.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:39.820 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:05:39.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:39.820 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:39.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:39.820 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:05:39.820 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:39.820 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:39.820 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58133
00:05:39.820 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:39.820 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58133
00:05:39.820 element at address: 0x200019affd40 with size: 1.000549 MiB
00:05:39.820 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58133
00:05:39.820 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:05:39.820 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58133
00:05:39.820 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:39.820 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58133
00:05:39.820 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:39.820 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58133
00:05:39.820 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:05:39.820 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:39.820 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:05:39.820 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:39.821 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:39.821 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:39.821 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:39.821 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58133
00:05:39.821 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:39.821 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58133
00:05:39.821 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:39.821 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:39.821 element at address: 0x200028864140 with size: 0.023804 MiB
00:05:39.821 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:39.821 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:39.821 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58133
00:05:39.821 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:05:39.821 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:39.821 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:39.821 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58133
00:05:39.821 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:39.821 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58133
00:05:39.821 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:39.821 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58133
00:05:39.821 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:05:39.821 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:39.821 08:16:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:39.821 08:16:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58133
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58133 ']'
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58133
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58133
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58133'
killing process with pid 58133
08:16:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58133
00:05:39.821 08:16:52 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58133
00:05:42.357
00:05:42.357 real 0m4.255s
00:05:42.357 user 0m4.195s
00:05:42.357 sys 0m0.584s
00:05:42.357 08:16:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.357 08:16:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:42.357 ************************************
00:05:42.357 END TEST dpdk_mem_utility
00:05:42.357 ************************************
00:05:42.357 08:16:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:42.357 08:16:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:42.357 08:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.357 08:16:54 -- common/autotest_common.sh@10 -- # set +x
00:05:42.357 ************************************
00:05:42.357 START TEST event
00:05:42.357 ************************************
00:05:42.357 08:16:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage... 
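The xtrace above walks autotest_common.sh's kill-and-reap flow: probe the pid with `kill -0`, resolve the process name with `ps --no-headers -o comm=`, refuse to signal anything running as sudo, then `kill` and `wait`. A minimal standalone sketch of that pattern (a hedged approximation, not the actual SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-reap pattern traced above (assumption: the target
# is our own child process, so `wait` can reap it).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 sends no signal; it only checks the pid exists and is signalable
    kill -0 "$pid" 2>/dev/null || return 1
    # same name lookup the trace shows: ps --no-headers -o comm= <pid>
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    # reap the child so it does not linger as a zombie
    wait "$pid" 2>/dev/null
    return 0
}
```

For a process you own, e.g. `sleep 60 & killprocess $!`, this terminates and reaps the child; the real helper adds a sudo guard and SIGKILL escalation on top.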
00:05:42.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:42.616 08:16:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:42.616 08:16:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:42.616 08:16:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:42.616 08:16:54 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:42.616 08:16:54 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:42.616 08:16:54 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:42.616 08:16:54 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:42.616 08:16:54 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:42.616 08:16:54 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:42.616 08:16:54 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:42.616 08:16:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:42.616 08:16:54 event -- scripts/common.sh@344 -- # case "$op" in
00:05:42.616 08:16:54 event -- scripts/common.sh@345 -- # : 1
00:05:42.616 08:16:54 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:42.616 08:16:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:42.616 08:16:54 event -- scripts/common.sh@365 -- # decimal 1
00:05:42.616 08:16:54 event -- scripts/common.sh@353 -- # local d=1
00:05:42.616 08:16:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:42.616 08:16:54 event -- scripts/common.sh@355 -- # echo 1
00:05:42.616 08:16:54 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:42.616 08:16:54 event -- scripts/common.sh@366 -- # decimal 2
00:05:42.616 08:16:54 event -- scripts/common.sh@353 -- # local d=2
00:05:42.616 08:16:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:42.616 08:16:54 event -- scripts/common.sh@355 -- # echo 2
00:05:42.616 08:16:54 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:42.616 08:16:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:42.616 08:16:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:42.616 08:16:54 event -- scripts/common.sh@368 -- # return 0
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:42.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.616 --rc genhtml_branch_coverage=1
00:05:42.616 --rc genhtml_function_coverage=1
00:05:42.616 --rc genhtml_legend=1
00:05:42.616 --rc geninfo_all_blocks=1
00:05:42.616 --rc geninfo_unexecuted_blocks=1
00:05:42.616
00:05:42.616 '
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:42.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.616 --rc genhtml_branch_coverage=1
00:05:42.616 --rc genhtml_function_coverage=1
00:05:42.616 --rc genhtml_legend=1
00:05:42.616 --rc geninfo_all_blocks=1
00:05:42.616 --rc geninfo_unexecuted_blocks=1
00:05:42.616
00:05:42.616 '
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:42.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.616 --rc genhtml_branch_coverage=1
00:05:42.616 --rc genhtml_function_coverage=1
00:05:42.616 --rc genhtml_legend=1
00:05:42.616 --rc geninfo_all_blocks=1
00:05:42.616 --rc geninfo_unexecuted_blocks=1
00:05:42.616
00:05:42.616 '
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:42.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.616 --rc genhtml_branch_coverage=1
00:05:42.616 --rc genhtml_function_coverage=1
00:05:42.616 --rc genhtml_legend=1
00:05:42.616 --rc geninfo_all_blocks=1
00:05:42.616 --rc geninfo_unexecuted_blocks=1
00:05:42.616
00:05:42.616 '
00:05:42.616 08:16:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:42.616 08:16:54 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:42.616 08:16:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:42.616 08:16:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.616 08:16:54 event -- common/autotest_common.sh@10 -- # set +x
00:05:42.616 ************************************
00:05:42.616 START TEST event_perf
00:05:42.616 ************************************
00:05:42.616 08:16:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:42.616 Running I/O for 1 seconds...[2024-12-13 08:16:54.959556] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
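The `lt 1.15 2` trace above steps through scripts/common.sh's component-wise version comparison: split both versions on `.`, `-`, and `:`, then compare the fields numerically until one side wins. A hedged standalone sketch of that logic (the real script also normalizes each component through a `decimal` helper, elided here; missing components are treated as 0):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare seen in the xtrace; the
# cmp_versions/lt names mirror the trace but this is an approximation,
# not the SPDK script itself.
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # walk the longer component list; absent components default to 0
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '=' ]]   # every component matched
}

lt() { cmp_versions "$1" '<' "$2"; }
```

`lt 1.15 2` succeeds because the first components already decide it: 1 < 2, so the `.15` is never consulted.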
00:05:42.616 [2024-12-13 08:16:54.959734] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ] 00:05:42.877 [2024-12-13 08:16:55.137853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.139 [2024-12-13 08:16:55.280191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.139 [2024-12-13 08:16:55.280297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.139 [2024-12-13 08:16:55.280344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.139 Running I/O for 1 seconds...[2024-12-13 08:16:55.280359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.516 00:05:44.516 lcore 0: 196515 00:05:44.516 lcore 1: 196514 00:05:44.516 lcore 2: 196516 00:05:44.516 lcore 3: 196514 00:05:44.516 done. 
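An aside on the output above: event_perf is launched with `-m 0xF`, and the log shows reactors starting on cores 0 through 3 before the per-lcore event counters are printed. The mapping from a hex core mask to that core list can be sketched as below. This is an illustrative helper, not part of the SPDK scripts; the name `decode_coremask` is made up for this sketch.

```shell
#!/usr/bin/env bash
# decode_coremask: print the core numbers selected by a hex core mask,
# e.g. the -m 0xF passed to event_perf above. Hypothetical helper for
# illustration only; SPDK/DPDK parse the mask internally.
decode_coremask() {
    local mask=$(( $1 ))    # bash arithmetic accepts 0xF, 0x3, ...
    local core=0
    local out=()
    while (( mask > 0 )); do
        # bit i set in the mask means core i is selected
        if (( mask & 1 )); then
            out+=("$core")
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out[@]}"
}

decode_coremask 0xF   # -> 0 1 2 3, matching the four reactors above
```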
00:05:44.516 00:05:44.516 real 0m1.626s 00:05:44.516 user 0m4.383s 00:05:44.516 sys 0m0.118s 00:05:44.516 08:16:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.516 08:16:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.516 ************************************ 00:05:44.516 END TEST event_perf 00:05:44.516 ************************************ 00:05:44.516 08:16:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.516 08:16:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.516 08:16:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.516 08:16:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.516 ************************************ 00:05:44.516 START TEST event_reactor 00:05:44.516 ************************************ 00:05:44.516 08:16:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.516 [2024-12-13 08:16:56.642171] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:44.516 [2024-12-13 08:16:56.642273] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:05:44.516 [2024-12-13 08:16:56.817847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.775 [2024-12-13 08:16:56.940266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.156 test_start 00:05:46.156 oneshot 00:05:46.156 tick 100 00:05:46.156 tick 100 00:05:46.156 tick 250 00:05:46.156 tick 100 00:05:46.156 tick 100 00:05:46.156 tick 250 00:05:46.156 tick 100 00:05:46.156 tick 500 00:05:46.156 tick 100 00:05:46.156 tick 100 00:05:46.156 tick 250 00:05:46.156 tick 100 00:05:46.156 tick 100 00:05:46.156 test_end 00:05:46.156 00:05:46.156 real 0m1.569s 00:05:46.156 user 0m1.372s 00:05:46.156 sys 0m0.087s 00:05:46.156 ************************************ 00:05:46.156 END TEST event_reactor 00:05:46.156 ************************************ 00:05:46.156 08:16:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.156 08:16:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:46.156 08:16:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.156 08:16:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.156 08:16:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.156 08:16:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.156 ************************************ 00:05:46.156 START TEST event_reactor_perf 00:05:46.156 ************************************ 00:05:46.156 08:16:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.156 [2024-12-13 
08:16:58.272419] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:46.156 [2024-12-13 08:16:58.272627] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58317 ] 00:05:46.156 [2024-12-13 08:16:58.448749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.415 [2024-12-13 08:16:58.577084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.797 test_start 00:05:47.797 test_end 00:05:47.797 Performance: 357711 events per second 00:05:47.797 00:05:47.797 real 0m1.577s 00:05:47.797 user 0m1.369s 00:05:47.797 sys 0m0.096s 00:05:47.797 ************************************ 00:05:47.797 END TEST event_reactor_perf 00:05:47.797 ************************************ 00:05:47.797 08:16:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.797 08:16:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 08:16:59 event -- event/event.sh@49 -- # uname -s 00:05:47.797 08:16:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:47.797 08:16:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.797 08:16:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.797 08:16:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.797 08:16:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.797 ************************************ 00:05:47.797 START TEST event_scheduler 00:05:47.797 ************************************ 00:05:47.797 08:16:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.797 * Looking for test storage... 
00:05:47.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.797 08:17:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.797 --rc genhtml_branch_coverage=1 00:05:47.797 --rc genhtml_function_coverage=1 00:05:47.797 --rc genhtml_legend=1 00:05:47.797 --rc geninfo_all_blocks=1 00:05:47.797 --rc geninfo_unexecuted_blocks=1 00:05:47.797 00:05:47.797 ' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.797 --rc genhtml_branch_coverage=1 00:05:47.797 --rc genhtml_function_coverage=1 00:05:47.797 --rc 
genhtml_legend=1 00:05:47.797 --rc geninfo_all_blocks=1 00:05:47.797 --rc geninfo_unexecuted_blocks=1 00:05:47.797 00:05:47.797 ' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.797 --rc genhtml_branch_coverage=1 00:05:47.797 --rc genhtml_function_coverage=1 00:05:47.797 --rc genhtml_legend=1 00:05:47.797 --rc geninfo_all_blocks=1 00:05:47.797 --rc geninfo_unexecuted_blocks=1 00:05:47.797 00:05:47.797 ' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.797 --rc genhtml_branch_coverage=1 00:05:47.797 --rc genhtml_function_coverage=1 00:05:47.797 --rc genhtml_legend=1 00:05:47.797 --rc geninfo_all_blocks=1 00:05:47.797 --rc geninfo_unexecuted_blocks=1 00:05:47.797 00:05:47.797 ' 00:05:47.797 08:17:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.797 08:17:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58393 00:05:47.797 08:17:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.797 08:17:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.797 08:17:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58393 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58393 ']' 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:47.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.797 08:17:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.056 [2024-12-13 08:17:00.178273] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:48.057 [2024-12-13 08:17:00.178508] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58393 ] 00:05:48.057 [2024-12-13 08:17:00.352930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.315 [2024-12-13 08:17:00.475740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.315 [2024-12-13 08:17:00.476002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.315 [2024-12-13 08:17:00.475960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.315 [2024-12-13 08:17:00.475917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:48.882 08:17:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.882 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.882 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.882 POWER: Cannot set governor of lcore 0 to performance 00:05:48.882 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.882 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.882 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.882 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.882 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:48.882 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:48.882 POWER: Unable to set Power Management Environment for lcore 0 00:05:48.882 [2024-12-13 08:17:01.089136] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:48.882 [2024-12-13 08:17:01.089195] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:48.882 [2024-12-13 08:17:01.089241] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:48.882 [2024-12-13 08:17:01.089287] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:48.882 [2024-12-13 08:17:01.089329] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:48.882 [2024-12-13 08:17:01.089375] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.882 08:17:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.882 08:17:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 [2024-12-13 08:17:01.425653] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
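Context for the POWER errors above: the dynamic scheduler tries to set the cpufreq governor of each lcore, fails because the VM exposes no writable cpufreq sysfs entries, and falls back to running without the dpdk governor (keeping the dynamic scheduler defaults: load limit 20, core limit 80, core busy 95). A minimal sketch of probing that condition from the shell follows; it is illustrative only, since SPDK's dpdk_governor goes through DPDK's power API rather than touching sysfs directly, and the function names here are invented for the sketch.

```shell
#!/usr/bin/env bash
# Probe whether the cpufreq scaling governor of a core is writable --
# the condition behind the "Cannot set governor of lcore 0" notices
# above. Hypothetical helpers, not part of the SPDK scripts.
governor_path() {
    # path format matches the one printed in the POWER error messages
    printf '/sys/devices/system/cpu/cpu%s/cpufreq/scaling_governor' "$1"
}

can_set_governor() {
    local path
    path=$(governor_path "$1")
    [ -w "$path" ]
}

if can_set_governor 0; then
    echo "lcore 0: governor is settable"
else
    echo "lcore 0: governor not settable, scheduler falls back"
fi
```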
00:05:49.142 08:17:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.142 08:17:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.142 08:17:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.142 08:17:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 ************************************ 00:05:49.142 START TEST scheduler_create_thread 00:05:49.142 ************************************ 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 2 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 3 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 4 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 5 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.142 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.401 6 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.402 7 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.402 8 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.402 9 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.402 10 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.402 08:17:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.800 08:17:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.800 08:17:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.800 08:17:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.800 08:17:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.800 08:17:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.370 08:17:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.370 08:17:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.370 08:17:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.370 08:17:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.309 08:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.309 08:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.309 08:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.309 08:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.309 08:17:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.248 08:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.248 00:05:53.248 real 0m3.884s 00:05:53.248 ************************************ 00:05:53.248 END TEST scheduler_create_thread 00:05:53.248 ************************************ 00:05:53.248 user 0m0.026s 00:05:53.248 sys 0m0.007s 00:05:53.248 08:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.248 08:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.248 08:17:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.248 08:17:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58393 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58393 ']' 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58393 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58393 00:05:53.248 killing process with pid 58393 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58393' 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58393 00:05:53.248 08:17:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58393 00:05:53.507 [2024-12-13 08:17:05.703961] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:54.890 00:05:54.890 real 0m7.014s 00:05:54.890 user 0m14.660s 00:05:54.890 sys 0m0.537s 00:05:54.890 08:17:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.890 08:17:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.890 ************************************ 00:05:54.890 END TEST event_scheduler 00:05:54.890 ************************************ 00:05:54.890 08:17:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.890 08:17:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.890 08:17:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.890 08:17:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.890 08:17:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.890 ************************************ 00:05:54.890 START TEST app_repeat 00:05:54.890 ************************************ 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58521 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.890 
08:17:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58521' 00:05:54.890 Process app_repeat pid: 58521 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.890 spdk_app_start Round 0 00:05:54.890 08:17:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58521 /var/tmp/spdk-nbd.sock 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58521 ']' 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.890 08:17:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.890 [2024-12-13 08:17:07.033628] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:54.890 [2024-12-13 08:17:07.034231] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58521 ] 00:05:54.890 [2024-12-13 08:17:07.195442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.150 [2024-12-13 08:17:07.363645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.150 [2024-12-13 08:17:07.363685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.720 08:17:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.720 08:17:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.720 08:17:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.979 Malloc0 00:05:55.979 08:17:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.549 Malloc1 00:05:56.549 08:17:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.549 08:17:08 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.549 /dev/nbd0 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.549 1+0 records in 00:05:56.549 1+0 
records out 00:05:56.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571142 s, 7.2 MB/s 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.549 08:17:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.549 08:17:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.824 /dev/nbd1 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.824 1+0 records in 00:05:56.824 1+0 records out 00:05:56.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273272 s, 15.0 MB/s 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.824 08:17:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.824 08:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.084 { 00:05:57.084 "nbd_device": "/dev/nbd0", 00:05:57.084 "bdev_name": "Malloc0" 00:05:57.084 }, 00:05:57.084 { 00:05:57.084 "nbd_device": "/dev/nbd1", 00:05:57.084 "bdev_name": "Malloc1" 00:05:57.084 } 00:05:57.084 ]' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.084 { 00:05:57.084 "nbd_device": "/dev/nbd0", 00:05:57.084 "bdev_name": "Malloc0" 00:05:57.084 }, 00:05:57.084 { 00:05:57.084 "nbd_device": "/dev/nbd1", 00:05:57.084 "bdev_name": "Malloc1" 00:05:57.084 } 00:05:57.084 ]' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.084 /dev/nbd1' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.084 /dev/nbd1' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.084 256+0 records in 00:05:57.084 256+0 records out 00:05:57.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572772 s, 183 MB/s 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.084 08:17:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.344 256+0 records in 00:05:57.344 256+0 records out 00:05:57.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244475 s, 42.9 MB/s 00:05:57.345 08:17:09 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.345 256+0 records in 00:05:57.345 256+0 records out 00:05:57.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255406 s, 41.1 MB/s 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.345 08:17:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.605 08:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.864 08:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.864 08:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.864 08:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.864 08:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.124 08:17:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.124 08:17:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.384 08:17:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.765 [2024-12-13 08:17:11.960644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.765 [2024-12-13 08:17:12.116219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.765 [2024-12-13 08:17:12.116221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.024 
[2024-12-13 08:17:12.314690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.024 [2024-12-13 08:17:12.314902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.404 spdk_app_start Round 1 00:06:01.405 08:17:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.405 08:17:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.405 08:17:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58521 /var/tmp/spdk-nbd.sock 00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58521 ']' 00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.405 08:17:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.664 08:17:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.664 08:17:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.664 08:17:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.923 Malloc0 00:06:02.182 08:17:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.464 Malloc1 00:06:02.464 08:17:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.464 08:17:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.464 08:17:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.464 08:17:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.464 08:17:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.465 08:17:14 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.465 08:17:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.465 /dev/nbd0 00:06:02.723 08:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.723 08:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.723 08:17:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.723 08:17:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.724 1+0 records in 00:06:02.724 1+0 records out 00:06:02.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409002 s, 10.0 MB/s 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.724 
08:17:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.724 08:17:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.724 08:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.724 08:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.724 08:17:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.983 /dev/nbd1 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.983 1+0 records in 00:06:02.983 1+0 records out 00:06:02.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257145 s, 15.9 MB/s 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.983 08:17:15 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.983 08:17:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.983 08:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.243 { 00:06:03.243 "nbd_device": "/dev/nbd0", 00:06:03.243 "bdev_name": "Malloc0" 00:06:03.243 }, 00:06:03.243 { 00:06:03.243 "nbd_device": "/dev/nbd1", 00:06:03.243 "bdev_name": "Malloc1" 00:06:03.243 } 00:06:03.243 ]' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.243 { 00:06:03.243 "nbd_device": "/dev/nbd0", 00:06:03.243 "bdev_name": "Malloc0" 00:06:03.243 }, 00:06:03.243 { 00:06:03.243 "nbd_device": "/dev/nbd1", 00:06:03.243 "bdev_name": "Malloc1" 00:06:03.243 } 00:06:03.243 ]' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.243 /dev/nbd1' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.243 /dev/nbd1' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.243 
08:17:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.243 256+0 records in 00:06:03.243 256+0 records out 00:06:03.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134422 s, 78.0 MB/s 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.243 256+0 records in 00:06:03.243 256+0 records out 00:06:03.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227451 s, 46.1 MB/s 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.243 256+0 records in 00:06:03.243 256+0 records out 00:06:03.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293461 s, 35.7 MB/s 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.243 08:17:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.503 08:17:15 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.503 08:17:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.763 08:17:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.022 08:17:16 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.022 08:17:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.022 08:17:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.591 08:17:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.972 [2024-12-13 08:17:18.006182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.972 [2024-12-13 08:17:18.124541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.972 [2024-12-13 08:17:18.124562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.972 [2024-12-13 08:17:18.318069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.972 [2024-12-13 08:17:18.318182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.905 spdk_app_start Round 2 00:06:07.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.905 08:17:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.905 08:17:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.905 08:17:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58521 /var/tmp/spdk-nbd.sock 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58521 ']' 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.905 08:17:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.905 08:17:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.905 08:17:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.905 08:17:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.163 Malloc0 00:06:08.163 08:17:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.423 Malloc1 00:06:08.423 08:17:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.423 08:17:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.683 /dev/nbd0 00:06:08.683 08:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.683 08:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.683 1+0 records in 00:06:08.683 1+0 records out 00:06:08.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576283 s, 7.1 MB/s 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.683 08:17:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.683 08:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.683 08:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.683 08:17:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.943 /dev/nbd1 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:08.943 08:17:21 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.943 1+0 records in 00:06:08.943 1+0 records out 00:06:08.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380219 s, 10.8 MB/s 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.943 08:17:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.943 08:17:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.203 { 00:06:09.203 "nbd_device": "/dev/nbd0", 00:06:09.203 "bdev_name": "Malloc0" 00:06:09.203 }, 00:06:09.203 { 00:06:09.203 "nbd_device": "/dev/nbd1", 00:06:09.203 "bdev_name": "Malloc1" 00:06:09.203 } 00:06:09.203 ]' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.203 { 
00:06:09.203 "nbd_device": "/dev/nbd0", 00:06:09.203 "bdev_name": "Malloc0" 00:06:09.203 }, 00:06:09.203 { 00:06:09.203 "nbd_device": "/dev/nbd1", 00:06:09.203 "bdev_name": "Malloc1" 00:06:09.203 } 00:06:09.203 ]' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.203 /dev/nbd1' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.203 /dev/nbd1' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.203 256+0 records in 00:06:09.203 256+0 records out 00:06:09.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142223 s, 73.7 MB/s 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.203 08:17:21 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.203 256+0 records in 00:06:09.203 256+0 records out 00:06:09.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247194 s, 42.4 MB/s 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.203 256+0 records in 00:06:09.203 256+0 records out 00:06:09.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249232 s, 42.1 MB/s 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.203 08:17:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:09.463 08:17:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.464 08:17:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.724 08:17:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.724 08:17:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.724 08:17:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.724 08:17:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.724 08:17:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.724 08:17:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.724 08:17:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.724 08:17:22 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.724 08:17:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.724 08:17:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.984 08:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.243 08:17:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.244 08:17:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.502 08:17:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.881 
[2024-12-13 08:17:23.979695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.881 [2024-12-13 08:17:24.090670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.881 [2024-12-13 08:17:24.090675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.141 [2024-12-13 08:17:24.278218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.141 [2024-12-13 08:17:24.278398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.519 08:17:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58521 /var/tmp/spdk-nbd.sock 00:06:13.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58521 ']' 00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.519 08:17:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:13.778 08:17:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58521 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58521 ']' 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58521 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58521 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58521' 00:06:13.778 killing process with pid 58521 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58521 00:06:13.778 08:17:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58521 00:06:15.156 spdk_app_start is called in Round 0. 00:06:15.156 Shutdown signal received, stop current app iteration 00:06:15.156 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:06:15.156 spdk_app_start is called in Round 1. 00:06:15.156 Shutdown signal received, stop current app iteration 00:06:15.156 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:06:15.156 spdk_app_start is called in Round 2. 
00:06:15.156 Shutdown signal received, stop current app iteration 00:06:15.156 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:06:15.156 spdk_app_start is called in Round 3. 00:06:15.156 Shutdown signal received, stop current app iteration 00:06:15.156 08:17:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.156 ************************************ 00:06:15.156 END TEST app_repeat 00:06:15.156 ************************************ 00:06:15.156 08:17:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.156 00:06:15.156 real 0m20.343s 00:06:15.156 user 0m43.909s 00:06:15.156 sys 0m2.920s 00:06:15.156 08:17:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.156 08:17:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.156 08:17:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.156 08:17:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.156 08:17:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.156 08:17:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.156 08:17:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.156 ************************************ 00:06:15.156 START TEST cpu_locks 00:06:15.156 ************************************ 00:06:15.156 08:17:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.156 * Looking for test storage... 
00:06:15.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:15.156 08:17:27 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.156 08:17:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.156 08:17:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.415 08:17:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.415 08:17:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.416 08:17:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.416 08:17:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.416 08:17:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.416 --rc genhtml_branch_coverage=1 00:06:15.416 --rc genhtml_function_coverage=1 00:06:15.416 --rc genhtml_legend=1 00:06:15.416 --rc geninfo_all_blocks=1 00:06:15.416 --rc geninfo_unexecuted_blocks=1 00:06:15.416 00:06:15.416 ' 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.416 --rc genhtml_branch_coverage=1 00:06:15.416 --rc genhtml_function_coverage=1 00:06:15.416 --rc genhtml_legend=1 00:06:15.416 --rc geninfo_all_blocks=1 00:06:15.416 --rc geninfo_unexecuted_blocks=1 
00:06:15.416 00:06:15.416 ' 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.416 --rc genhtml_branch_coverage=1 00:06:15.416 --rc genhtml_function_coverage=1 00:06:15.416 --rc genhtml_legend=1 00:06:15.416 --rc geninfo_all_blocks=1 00:06:15.416 --rc geninfo_unexecuted_blocks=1 00:06:15.416 00:06:15.416 ' 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.416 --rc genhtml_branch_coverage=1 00:06:15.416 --rc genhtml_function_coverage=1 00:06:15.416 --rc genhtml_legend=1 00:06:15.416 --rc geninfo_all_blocks=1 00:06:15.416 --rc geninfo_unexecuted_blocks=1 00:06:15.416 00:06:15.416 ' 00:06:15.416 08:17:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.416 08:17:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.416 08:17:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.416 08:17:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.416 08:17:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.416 ************************************ 00:06:15.416 START TEST default_locks 00:06:15.416 ************************************ 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58976 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.416 
08:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58976 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58976 ']' 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.416 08:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.416 [2024-12-13 08:17:27.709115] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:06:15.416 [2024-12-13 08:17:27.709760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:06:15.674 [2024-12-13 08:17:27.871344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.674 [2024-12-13 08:17:28.013704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58976 ']' 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58976 00:06:17.049 killing process with pid 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58976' 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58976 00:06:17.049 08:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58976 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58976 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58976 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.332 ERROR: process (pid: 58976) is no longer running 00:06:20.332 ************************************ 00:06:20.332 END TEST default_locks 00:06:20.332 ************************************ 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58976 00:06:20.332 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58976 ']' 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58976) - No such process 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:20.333 08:17:32 
event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.333 00:06:20.333 real 0m4.507s 00:06:20.333 user 0m4.478s 00:06:20.333 sys 0m0.622s 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.333 08:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 08:17:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:20.333 08:17:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.333 08:17:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.333 08:17:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 ************************************ 00:06:20.333 START TEST default_locks_via_rpc 00:06:20.333 ************************************ 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59051 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59051 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59051 ']' 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.333 08:17:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.333 08:17:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.333 [2024-12-13 08:17:32.261261] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:20.333 [2024-12-13 08:17:32.261501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:06:20.333 [2024-12-13 08:17:32.426719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.333 [2024-12-13 08:17:32.566795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # 
lock_files=() 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59051 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59051 00:06:21.269 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59051 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59051 ']' 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59051 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59051 00:06:21.528 killing process with pid 59051 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.528 
08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59051' 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59051 00:06:21.528 08:17:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59051 00:06:24.813 00:06:24.813 real 0m4.337s 00:06:24.813 user 0m4.350s 00:06:24.813 sys 0m0.607s 00:06:24.813 08:17:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.813 08:17:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 END TEST default_locks_via_rpc 00:06:24.813 ************************************ 00:06:24.813 08:17:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.813 08:17:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.813 08:17:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.813 08:17:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 START TEST non_locking_app_on_locked_coremask 00:06:24.813 ************************************ 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59130 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59130 /var/tmp/spdk.sock 00:06:24.813 08:17:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.813 08:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 [2024-12-13 08:17:36.658935] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:24.813 [2024-12-13 08:17:36.659096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:06:24.813 [2024-12-13 08:17:36.839827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.813 [2024-12-13 08:17:36.958550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59152 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59152 /var/tmp/spdk2.sock 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59152 ']' 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.749 08:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.749 [2024-12-13 08:17:38.005197] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:06:25.749 [2024-12-13 08:17:38.005438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:06:26.008 [2024-12-13 08:17:38.188793] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.008 [2024-12-13 08:17:38.188899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.265 [2024-12-13 08:17:38.463061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.800 08:17:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.800 08:17:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.800 08:17:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59130 00:06:28.800 08:17:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:06:28.800 08:17:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59130 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
59130 00:06:28.800 killing process with pid 59130 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:06:28.800 08:17:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59152 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59152 ']' 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59152 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59152 00:06:34.087 killing process with pid 59152 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59152' 00:06:34.087 08:17:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59152 00:06:34.087 08:17:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59152 00:06:36.626 00:06:36.626 real 0m12.200s 00:06:36.626 user 0m12.548s 00:06:36.626 sys 0m1.261s 00:06:36.626 ************************************ 00:06:36.626 END TEST non_locking_app_on_locked_coremask 00:06:36.626 ************************************ 00:06:36.626 08:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.626 08:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.626 08:17:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:36.626 08:17:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.626 08:17:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.626 08:17:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.626 ************************************ 00:06:36.626 START TEST locking_app_on_unlocked_coremask 00:06:36.626 ************************************ 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59303 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59303 /var/tmp/spdk.sock 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59303 ']' 
00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.626 08:17:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.626 [2024-12-13 08:17:48.907504] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:36.626 [2024-12-13 08:17:48.907618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59303 ] 00:06:36.886 [2024-12-13 08:17:49.082263] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.886 [2024-12-13 08:17:49.082318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.886 [2024-12-13 08:17:49.208721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59325 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59325 /var/tmp/spdk2.sock 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.824 08:17:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.083 [2024-12-13 08:17:50.238185] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:06:38.083 [2024-12-13 08:17:50.238387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:06:38.083 [2024-12-13 08:17:50.410426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.342 [2024-12-13 08:17:50.648820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.937 08:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.937 08:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.937 08:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59325 00:06:40.937 08:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59325 00:06:40.937 08:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59303 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59303 ']' 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59303 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59303 00:06:41.196 killing process with pid 59303 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59303' 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59303 00:06:41.196 08:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59303 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59325 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59325 ']' 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59325 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59325 00:06:46.466 killing process with pid 59325 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59325' 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59325 00:06:46.466 08:17:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59325 00:06:48.370 00:06:48.370 real 0m11.873s 00:06:48.370 user 0m12.229s 00:06:48.370 sys 0m1.228s 00:06:48.370 08:18:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.370 ************************************ 00:06:48.370 END TEST locking_app_on_unlocked_coremask 00:06:48.370 ************************************ 00:06:48.370 08:18:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.629 08:18:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:48.629 08:18:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.629 08:18:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.629 08:18:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.629 ************************************ 00:06:48.629 START TEST locking_app_on_locked_coremask 00:06:48.629 ************************************ 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59475 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59475 /var/tmp/spdk.sock 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59475 ']' 00:06:48.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.629 08:18:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.630 [2024-12-13 08:18:00.852669] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:48.630 [2024-12-13 08:18:00.852869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ] 00:06:48.889 [2024-12-13 08:18:01.027183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.889 [2024-12-13 08:18:01.149752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59496 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59496 /var/tmp/spdk2.sock 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 -r /var/tmp/spdk2.sock 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59496 /var/tmp/spdk2.sock 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59496 /var/tmp/spdk2.sock 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59496 ']' 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.894 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.894 [2024-12-13 08:18:02.147264] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:49.894 [2024-12-13 08:18:02.147487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:06:50.153 [2024-12-13 08:18:02.322371] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59475 has claimed it. 00:06:50.153 [2024-12-13 08:18:02.322442] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.412 ERROR: process (pid: 59496) is no longer running 00:06:50.412 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59496) - No such process 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59475 00:06:50.412 08:18:02 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59475 00:06:50.412 08:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59475 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59475 ']' 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59475 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59475 00:06:50.983 killing process with pid 59475 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.983 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59475' 00:06:50.984 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59475 00:06:50.984 08:18:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59475 00:06:53.521 00:06:53.521 real 0m4.841s 00:06:53.521 user 0m5.022s 00:06:53.521 sys 0m0.759s 00:06:53.521 08:18:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.522 08:18:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.522 
************************************ 00:06:53.522 END TEST locking_app_on_locked_coremask 00:06:53.522 ************************************ 00:06:53.522 08:18:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.522 08:18:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.522 08:18:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.522 08:18:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.522 ************************************ 00:06:53.522 START TEST locking_overlapped_coremask 00:06:53.522 ************************************ 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59566 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59566 /var/tmp/spdk.sock 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59566 ']' 00:06:53.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.522 08:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.522 [2024-12-13 08:18:05.752817] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:53.522 [2024-12-13 08:18:05.752941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59566 ] 00:06:53.781 [2024-12-13 08:18:05.914975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.781 [2024-12-13 08:18:06.039996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.781 [2024-12-13 08:18:06.040162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.781 [2024-12-13 08:18:06.040202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59584 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59584 /var/tmp/spdk2.sock 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59584 /var/tmp/spdk2.sock 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.721 08:18:06 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59584 /var/tmp/spdk2.sock 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59584 ']' 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.721 08:18:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.980 [2024-12-13 08:18:07.103349] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:06:54.980 [2024-12-13 08:18:07.103612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:06:54.980 [2024-12-13 08:18:07.305370] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59566 has claimed it. 00:06:54.980 [2024-12-13 08:18:07.305450] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.551 ERROR: process (pid: 59584) is no longer running 00:06:55.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59584) - No such process 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59566 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59566 ']' 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59566 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59566 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59566' 00:06:55.551 killing process with pid 59566 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59566 00:06:55.551 08:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59566 00:06:58.091 00:06:58.091 real 0m4.603s 00:06:58.091 user 0m12.568s 00:06:58.091 sys 0m0.638s 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.091 ************************************ 
00:06:58.091 END TEST locking_overlapped_coremask 00:06:58.091 ************************************ 00:06:58.091 08:18:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:58.091 08:18:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.091 08:18:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.091 08:18:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.091 ************************************ 00:06:58.091 START TEST locking_overlapped_coremask_via_rpc 00:06:58.091 ************************************ 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59648 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59648 /var/tmp/spdk.sock 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59648 ']' 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.091 08:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.091 [2024-12-13 08:18:10.422067] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:58.091 [2024-12-13 08:18:10.422194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:06:58.351 [2024-12-13 08:18:10.595812] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:58.351 [2024-12-13 08:18:10.595941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.611 [2024-12-13 08:18:10.718736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.611 [2024-12-13 08:18:10.718835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.611 [2024-12-13 08:18:10.718870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59677 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59677 /var/tmp/spdk2.sock 00:06:59.557 08:18:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59677 ']' 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.557 08:18:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.557 [2024-12-13 08:18:11.765382] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:59.557 [2024-12-13 08:18:11.765944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:06:59.817 [2024-12-13 08:18:11.955896] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.817 [2024-12-13 08:18:11.955950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.077 [2024-12-13 08:18:12.206909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.077 [2024-12-13 08:18:12.206955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.077 [2024-12-13 08:18:12.207018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.610 08:18:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.610 [2024-12-13 08:18:14.371332] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59648 has claimed it. 00:07:02.610 request: 00:07:02.610 { 00:07:02.610 "method": "framework_enable_cpumask_locks", 00:07:02.610 "req_id": 1 00:07:02.610 } 00:07:02.610 Got JSON-RPC error response 00:07:02.610 response: 00:07:02.610 { 00:07:02.610 "code": -32603, 00:07:02.610 "message": "Failed to claim CPU core: 2" 00:07:02.610 } 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59648 /var/tmp/spdk.sock 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59648 ']' 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59677 /var/tmp/spdk2.sock 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59677 ']' 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.610 00:07:02.610 real 0m4.656s 00:07:02.610 user 0m1.512s 00:07:02.610 sys 0m0.203s 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.610 08:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.610 ************************************ 00:07:02.610 END TEST locking_overlapped_coremask_via_rpc 00:07:02.610 ************************************ 00:07:02.874 08:18:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.874 08:18:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59648 ]] 00:07:02.874 08:18:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59648 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']' 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59648 00:07:02.874 killing process with pid 59648 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59648' 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59648 00:07:02.874 08:18:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59648 00:07:06.162 08:18:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59677 ]] 00:07:06.162 08:18:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59677 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59677 ']' 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59677 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59677 00:07:06.162 killing process with pid 59677 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59677' 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59677 00:07:06.162 08:18:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59677 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59648 ]] 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59648 00:07:08.066 Process with pid 59648 is not found 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']' 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648 00:07:08.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59648) - No such process 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59648 is not found' 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59677 ]] 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59677 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59677 ']' 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59677 00:07:08.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59677) - No such process 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59677 is not found' 00:07:08.066 Process with pid 59677 is not found 00:07:08.066 08:18:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.066 00:07:08.066 real 0m52.996s 00:07:08.066 user 1m31.579s 00:07:08.066 sys 0m6.570s 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.066 08:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.066 
************************************ 00:07:08.066 END TEST cpu_locks 00:07:08.066 ************************************ 00:07:08.066 00:07:08.066 real 1m25.730s 00:07:08.066 user 2m37.522s 00:07:08.066 sys 0m10.696s 00:07:08.066 08:18:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.066 08:18:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.066 ************************************ 00:07:08.066 END TEST event 00:07:08.066 ************************************ 00:07:08.324 08:18:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:08.324 08:18:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.324 08:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.324 08:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:08.324 ************************************ 00:07:08.324 START TEST thread 00:07:08.324 ************************************ 00:07:08.324 08:18:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:08.325 * Looking for test storage... 
00:07:08.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:08.325 08:18:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.325 08:18:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.325 08:18:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.325 08:18:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.325 08:18:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.325 08:18:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.325 08:18:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.325 08:18:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.325 08:18:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.325 08:18:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.325 08:18:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.325 08:18:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:08.325 08:18:20 thread -- scripts/common.sh@345 -- # : 1 00:07:08.325 08:18:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.325 08:18:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.325 08:18:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:08.325 08:18:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:08.325 08:18:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.325 08:18:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:08.325 08:18:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.325 08:18:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:08.325 08:18:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:08.325 08:18:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.325 08:18:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:08.325 08:18:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.325 08:18:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.325 08:18:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.325 08:18:20 thread -- scripts/common.sh@368 -- # return 0 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.325 --rc genhtml_branch_coverage=1 00:07:08.325 --rc genhtml_function_coverage=1 00:07:08.325 --rc genhtml_legend=1 00:07:08.325 --rc geninfo_all_blocks=1 00:07:08.325 --rc geninfo_unexecuted_blocks=1 00:07:08.325 00:07:08.325 ' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.325 --rc genhtml_branch_coverage=1 00:07:08.325 --rc genhtml_function_coverage=1 00:07:08.325 --rc genhtml_legend=1 00:07:08.325 --rc geninfo_all_blocks=1 00:07:08.325 --rc geninfo_unexecuted_blocks=1 00:07:08.325 00:07:08.325 ' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:08.325 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.325 --rc genhtml_branch_coverage=1 00:07:08.325 --rc genhtml_function_coverage=1 00:07:08.325 --rc genhtml_legend=1 00:07:08.325 --rc geninfo_all_blocks=1 00:07:08.325 --rc geninfo_unexecuted_blocks=1 00:07:08.325 00:07:08.325 ' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.325 --rc genhtml_branch_coverage=1 00:07:08.325 --rc genhtml_function_coverage=1 00:07:08.325 --rc genhtml_legend=1 00:07:08.325 --rc geninfo_all_blocks=1 00:07:08.325 --rc geninfo_unexecuted_blocks=1 00:07:08.325 00:07:08.325 ' 00:07:08.325 08:18:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.325 08:18:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.325 ************************************ 00:07:08.325 START TEST thread_poller_perf 00:07:08.325 ************************************ 00:07:08.325 08:18:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:08.584 [2024-12-13 08:18:20.736705] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:08.584 [2024-12-13 08:18:20.736907] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:07:08.584 [2024-12-13 08:18:20.912274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.842 [2024-12-13 08:18:21.049112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.842 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:10.240 [2024-12-13T08:18:22.605Z] ====================================== 00:07:10.240 [2024-12-13T08:18:22.605Z] busy:2301591936 (cyc) 00:07:10.240 [2024-12-13T08:18:22.605Z] total_run_count: 371000 00:07:10.240 [2024-12-13T08:18:22.605Z] tsc_hz: 2290000000 (cyc) 00:07:10.240 [2024-12-13T08:18:22.605Z] ====================================== 00:07:10.240 [2024-12-13T08:18:22.605Z] poller_cost: 6203 (cyc), 2708 (nsec) 00:07:10.240 00:07:10.240 real 0m1.605s 00:07:10.240 user 0m1.397s 00:07:10.240 sys 0m0.099s 00:07:10.240 ************************************ 00:07:10.240 END TEST thread_poller_perf 00:07:10.240 ************************************ 00:07:10.240 08:18:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.240 08:18:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.240 08:18:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.240 08:18:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:10.240 08:18:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.240 08:18:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.240 ************************************ 00:07:10.240 START TEST thread_poller_perf 00:07:10.240 
************************************ 00:07:10.240 08:18:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.240 [2024-12-13 08:18:22.398100] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:10.240 [2024-12-13 08:18:22.398235] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:07:10.240 [2024-12-13 08:18:22.573573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.498 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.498 [2024-12-13 08:18:22.692774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.873 [2024-12-13T08:18:24.238Z] ====================================== 00:07:11.873 [2024-12-13T08:18:24.238Z] busy:2293572868 (cyc) 00:07:11.873 [2024-12-13T08:18:24.238Z] total_run_count: 4287000 00:07:11.873 [2024-12-13T08:18:24.238Z] tsc_hz: 2290000000 (cyc) 00:07:11.873 [2024-12-13T08:18:24.238Z] ====================================== 00:07:11.873 [2024-12-13T08:18:24.238Z] poller_cost: 535 (cyc), 233 (nsec) 00:07:11.873 ************************************ 00:07:11.873 END TEST thread_poller_perf 00:07:11.873 ************************************ 00:07:11.873 00:07:11.873 real 0m1.567s 00:07:11.873 user 0m1.369s 00:07:11.873 sys 0m0.091s 00:07:11.873 08:18:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.873 08:18:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 08:18:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.873 ************************************ 00:07:11.873 END TEST thread 00:07:11.873 ************************************ 00:07:11.873 
00:07:11.873 real 0m3.496s 00:07:11.873 user 0m2.918s 00:07:11.873 sys 0m0.372s 00:07:11.873 08:18:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.873 08:18:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 08:18:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:11.873 08:18:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.873 08:18:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.873 08:18:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.873 08:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 ************************************ 00:07:11.873 START TEST app_cmdline 00:07:11.873 ************************************ 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.873 * Looking for test storage... 00:07:11.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.873 08:18:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.873 08:18:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.873 --rc genhtml_branch_coverage=1 00:07:11.873 --rc genhtml_function_coverage=1 00:07:11.873 --rc 
genhtml_legend=1 00:07:11.873 --rc geninfo_all_blocks=1 00:07:11.873 --rc geninfo_unexecuted_blocks=1 00:07:11.874 00:07:11.874 ' 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.874 --rc genhtml_branch_coverage=1 00:07:11.874 --rc genhtml_function_coverage=1 00:07:11.874 --rc genhtml_legend=1 00:07:11.874 --rc geninfo_all_blocks=1 00:07:11.874 --rc geninfo_unexecuted_blocks=1 00:07:11.874 00:07:11.874 ' 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.874 --rc genhtml_branch_coverage=1 00:07:11.874 --rc genhtml_function_coverage=1 00:07:11.874 --rc genhtml_legend=1 00:07:11.874 --rc geninfo_all_blocks=1 00:07:11.874 --rc geninfo_unexecuted_blocks=1 00:07:11.874 00:07:11.874 ' 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.874 --rc genhtml_branch_coverage=1 00:07:11.874 --rc genhtml_function_coverage=1 00:07:11.874 --rc genhtml_legend=1 00:07:11.874 --rc geninfo_all_blocks=1 00:07:11.874 --rc geninfo_unexecuted_blocks=1 00:07:11.874 00:07:11.874 ' 00:07:11.874 08:18:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.874 08:18:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60005 00:07:11.874 08:18:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60005 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60005 ']' 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.874 08:18:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed 
spdk_get_version,rpc_get_methods 00:07:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.874 08:18:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 [2024-12-13 08:18:24.310133] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:12.132 [2024-12-13 08:18:24.310378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60005 ] 00:07:12.132 [2024-12-13 08:18:24.489731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.390 [2024-12-13 08:18:24.620296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.323 08:18:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.323 08:18:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:13.323 08:18:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:13.582 { 00:07:13.582 "version": "SPDK v25.01-pre git sha1 575641720", 00:07:13.582 "fields": { 00:07:13.582 "major": 25, 00:07:13.582 "minor": 1, 00:07:13.582 "patch": 0, 00:07:13.582 "suffix": "-pre", 00:07:13.582 "commit": "575641720" 00:07:13.582 } 00:07:13.582 } 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@26 
-- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:13.582 08:18:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.582 08:18:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.583 08:18:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.583 08:18:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.583 08:18:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.583 08:18:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.583 08:18:25 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.583 08:18:25 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.842 request: 00:07:13.842 { 00:07:13.842 "method": "env_dpdk_get_mem_stats", 00:07:13.842 "req_id": 1 00:07:13.842 } 00:07:13.842 Got JSON-RPC error response 00:07:13.842 response: 00:07:13.842 { 00:07:13.842 "code": -32601, 00:07:13.842 "message": "Method not found" 00:07:13.842 } 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.842 08:18:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60005 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60005 ']' 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60005 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60005 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.842 killing process with pid 60005 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60005' 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 60005 00:07:13.842 08:18:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 60005 00:07:16.390 ************************************ 00:07:16.390 END TEST app_cmdline 00:07:16.390 
************************************ 00:07:16.390 00:07:16.390 real 0m4.709s 00:07:16.390 user 0m5.069s 00:07:16.390 sys 0m0.605s 00:07:16.390 08:18:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.390 08:18:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.650 08:18:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.650 08:18:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.650 08:18:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.650 08:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:16.650 ************************************ 00:07:16.650 START TEST version 00:07:16.650 ************************************ 00:07:16.650 08:18:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.650 * Looking for test storage... 00:07:16.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:16.650 08:18:28 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.650 08:18:28 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.650 08:18:28 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.650 08:18:28 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.650 08:18:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.650 08:18:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.650 08:18:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.650 08:18:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.650 08:18:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.650 08:18:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.650 08:18:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.650 08:18:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.650 08:18:28 version -- scripts/common.sh@340 -- # ver1_l=2 
00:07:16.650 08:18:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.650 08:18:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.650 08:18:29 version -- scripts/common.sh@344 -- # case "$op" in 00:07:16.650 08:18:29 version -- scripts/common.sh@345 -- # : 1 00:07:16.650 08:18:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.650 08:18:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.650 08:18:29 version -- scripts/common.sh@365 -- # decimal 1 00:07:16.650 08:18:29 version -- scripts/common.sh@353 -- # local d=1 00:07:16.650 08:18:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.650 08:18:29 version -- scripts/common.sh@355 -- # echo 1 00:07:16.650 08:18:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.650 08:18:29 version -- scripts/common.sh@366 -- # decimal 2 00:07:16.910 08:18:29 version -- scripts/common.sh@353 -- # local d=2 00:07:16.910 08:18:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.910 08:18:29 version -- scripts/common.sh@355 -- # echo 2 00:07:16.910 08:18:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.910 08:18:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.910 08:18:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.910 08:18:29 version -- scripts/common.sh@368 -- # return 0 00:07:16.910 08:18:29 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.910 08:18:29 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.910 --rc genhtml_branch_coverage=1 00:07:16.910 --rc genhtml_function_coverage=1 00:07:16.910 --rc genhtml_legend=1 00:07:16.910 --rc geninfo_all_blocks=1 00:07:16.910 --rc geninfo_unexecuted_blocks=1 00:07:16.910 00:07:16.910 ' 00:07:16.910 08:18:29 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:18:29 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:18:29 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:18:29 version -- app/version.sh@17 -- # get_header_version major 00:07:16.911 08:18:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # cut -f2 00:07:16.911 08:18:29 version -- app/version.sh@17 -- # major=25 00:07:16.911 08:18:29 version -- app/version.sh@18 -- # get_header_version minor 00:07:16.911 08:18:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # cut -f2 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.911 08:18:29 version -- app/version.sh@18 -- 
# minor=1 00:07:16.911 08:18:29 version -- app/version.sh@19 -- # get_header_version patch 00:07:16.911 08:18:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # cut -f2 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.911 08:18:29 version -- app/version.sh@19 -- # patch=0 00:07:16.911 08:18:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:16.911 08:18:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # cut -f2 00:07:16.911 08:18:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.911 08:18:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:16.911 08:18:29 version -- app/version.sh@22 -- # version=25.1 00:07:16.911 08:18:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.911 08:18:29 version -- app/version.sh@28 -- # version=25.1rc0 00:07:16.911 08:18:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:16.911 08:18:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:16.911 08:18:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:16.911 08:18:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:16.911 00:07:16.911 real 0m0.299s 00:07:16.911 user 0m0.167s 00:07:16.911 sys 0m0.176s 00:07:16.911 ************************************ 00:07:16.911 END TEST version 00:07:16.911 ************************************ 00:07:16.911 08:18:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.911 08:18:29 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.911 08:18:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:16.911 08:18:29 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:16.911 08:18:29 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:16.911 08:18:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.911 08:18:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.911 08:18:29 -- common/autotest_common.sh@10 -- # set +x 00:07:16.911 ************************************ 00:07:16.911 START TEST bdev_raid 00:07:16.911 ************************************ 00:07:16.911 08:18:29 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:17.171 * Looking for test storage... 00:07:17.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:17.171 08:18:29 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:17.171 08:18:29 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:07:17.171 08:18:29 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:17.171 08:18:29 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.171 
08:18:29 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:17.171 08:18:29 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.172 08:18:29 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:17.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.172 --rc genhtml_branch_coverage=1 00:07:17.172 --rc genhtml_function_coverage=1 00:07:17.172 --rc genhtml_legend=1 00:07:17.172 --rc geninfo_all_blocks=1 00:07:17.172 --rc geninfo_unexecuted_blocks=1 00:07:17.172 00:07:17.172 ' 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:07:17.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.172 --rc genhtml_branch_coverage=1 00:07:17.172 --rc genhtml_function_coverage=1 00:07:17.172 --rc genhtml_legend=1 00:07:17.172 --rc geninfo_all_blocks=1 00:07:17.172 --rc geninfo_unexecuted_blocks=1 00:07:17.172 00:07:17.172 ' 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:17.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.172 --rc genhtml_branch_coverage=1 00:07:17.172 --rc genhtml_function_coverage=1 00:07:17.172 --rc genhtml_legend=1 00:07:17.172 --rc geninfo_all_blocks=1 00:07:17.172 --rc geninfo_unexecuted_blocks=1 00:07:17.172 00:07:17.172 ' 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:17.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.172 --rc genhtml_branch_coverage=1 00:07:17.172 --rc genhtml_function_coverage=1 00:07:17.172 --rc genhtml_legend=1 00:07:17.172 --rc geninfo_all_blocks=1 00:07:17.172 --rc geninfo_unexecuted_blocks=1 00:07:17.172 00:07:17.172 ' 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:17.172 08:18:29 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:17.172 08:18:29 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.172 08:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:07:17.172 ************************************ 00:07:17.172 START TEST raid1_resize_data_offset_test 00:07:17.172 ************************************ 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60194 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60194' 00:07:17.172 Process raid pid: 60194 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60194 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60194 ']' 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.172 08:18:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.172 [2024-12-13 08:18:29.504345] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:17.172 [2024-12-13 08:18:29.504549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.432 [2024-12-13 08:18:29.668149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.432 [2024-12-13 08:18:29.789991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.692 [2024-12-13 08:18:30.011204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.693 [2024-12-13 08:18:30.011254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 malloc0 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 malloc1 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.263 08:18:30 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 null0 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 [2024-12-13 08:18:30.588412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:18.263 [2024-12-13 08:18:30.590461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:18.263 [2024-12-13 08:18:30.590518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:18.263 [2024-12-13 08:18:30.590688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:18.263 [2024-12-13 08:18:30.590702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:18.263 [2024-12-13 08:18:30.591022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:18.263 [2024-12-13 08:18:30.591252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:18.263 [2024-12-13 08:18:30.591267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:18.263 [2024-12-13 08:18:30.591465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.523 [2024-12-13 08:18:30.648361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.523 08:18:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.093 malloc2 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.093 [2024-12-13 08:18:31.216936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:19.093 [2024-12-13 08:18:31.233925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.093 [2024-12-13 08:18:31.235818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60194 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60194 ']' 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60194 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194 00:07:19.093 killing process with pid 60194 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194' 00:07:19.093 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60194 00:07:19.094 [2024-12-13 08:18:31.330886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.094 08:18:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60194 00:07:19.094 [2024-12-13 08:18:31.332021] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:19.094 [2024-12-13 08:18:31.332089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.094 [2024-12-13 08:18:31.332117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:19.094 [2024-12-13 08:18:31.368389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.094 [2024-12-13 08:18:31.368723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.094 [2024-12-13 08:18:31.368741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:21.000 [2024-12-13 08:18:33.300471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.380 08:18:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:22.380 00:07:22.380 real 0m5.109s 00:07:22.380 user 0m5.080s 00:07:22.380 sys 0m0.535s 00:07:22.380 
************************************ 00:07:22.380 END TEST raid1_resize_data_offset_test 00:07:22.380 ************************************ 00:07:22.380 08:18:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.380 08:18:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.380 08:18:34 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:22.380 08:18:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.380 08:18:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.380 08:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.380 ************************************ 00:07:22.380 START TEST raid0_resize_superblock_test 00:07:22.380 ************************************ 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60283 00:07:22.380 Process raid pid: 60283 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60283' 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60283 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60283 ']' 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.380 08:18:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.380 [2024-12-13 08:18:34.679226] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:22.380 [2024-12-13 08:18:34.679456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.640 [2024-12-13 08:18:34.859278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.640 [2024-12-13 08:18:34.988033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.897 [2024-12-13 08:18:35.206674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.897 [2024-12-13 08:18:35.206712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.462 08:18:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.462 08:18:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.462 08:18:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:23.462 08:18:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.462 08:18:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:24.030 malloc0 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 [2024-12-13 08:18:36.125121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:24.030 [2024-12-13 08:18:36.125189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.030 [2024-12-13 08:18:36.125210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.030 [2024-12-13 08:18:36.125223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.030 [2024-12-13 08:18:36.127446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.030 [2024-12-13 08:18:36.127489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:24.030 pt0 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 56d724c9-c37f-40a9-bde9-74222752748f 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 7775b912-bdce-4b0e-b974-de2f6e857b99 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 e3e54193-3283-4de4-98b8-6d40b9b77192 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 [2024-12-13 08:18:36.259683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7775b912-bdce-4b0e-b974-de2f6e857b99 is claimed 00:07:24.030 [2024-12-13 08:18:36.259829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e3e54193-3283-4de4-98b8-6d40b9b77192 is claimed 00:07:24.030 [2024-12-13 08:18:36.260007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.030 [2024-12-13 08:18:36.260027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:24.030 [2024-12-13 08:18:36.260376] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.030 [2024-12-13 08:18:36.260603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.030 [2024-12-13 08:18:36.260623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:24.030 [2024-12-13 08:18:36.260832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:24.030 08:18:36 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.030 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.030 [2024-12-13 08:18:36.375776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.289 [2024-12-13 08:18:36.419640] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.289 [2024-12-13 08:18:36.419751] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7775b912-bdce-4b0e-b974-de2f6e857b99' was resized: old size 131072, new size 204800 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.289 [2024-12-13 08:18:36.431568] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.289 [2024-12-13 08:18:36.431690] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e3e54193-3283-4de4-98b8-6d40b9b77192' was resized: old size 131072, new size 204800 00:07:24.289 [2024-12-13 08:18:36.431770] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.289 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 08:18:36 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 [2024-12-13 08:18:36.547422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 [2024-12-13 08:18:36.587185] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:24.290 [2024-12-13 08:18:36.587324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:24.290 [2024-12-13 08:18:36.587376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.290 [2024-12-13 08:18:36.587430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:24.290 [2024-12-13 08:18:36.587618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.290 [2024-12-13 08:18:36.587702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.290 [2024-12-13 08:18:36.587760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 [2024-12-13 08:18:36.599011] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:24.290 [2024-12-13 08:18:36.599140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.290 [2024-12-13 08:18:36.599179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:24.290 [2024-12-13 08:18:36.599209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.290 [2024-12-13 08:18:36.601606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.290 [2024-12-13 08:18:36.601686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
pt0 00:07:24.290 [2024-12-13 08:18:36.603694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7775b912-bdce-4b0e-b974-de2f6e857b99 [2024-12-13 08:18:36.603826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7775b912-bdce-4b0e-b974-de2f6e857b99 is claimed [2024-12-13 08:18:36.603989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e3e54193-3283-4de4-98b8-6d40b9b77192 [2024-12-13 08:18:36.604053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e3e54193-3283-4de4-98b8-6d40b9b77192 is claimed [2024-12-13 08:18:36.604318] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e3e54193-3283-4de4-98b8-6d40b9b77192 (2) smaller than existing raid bdev Raid (3) 00:07:24.290 [2024-12-13 08:18:36.604379] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7775b912-bdce-4b0e-b974-de2f6e857b99: File exists 00:07:24.290 [2024-12-13 08:18:36.604421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 [2024-12-13 08:18:36.604451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 [2024-12-13 08:18:36.604729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 [2024-12-13 08:18:36.604912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 [2024-12-13 08:18:36.604922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 [2024-12-13 08:18:36.605111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd
bdev_wait_for_examine 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.290 [2024-12-13 08:18:36.628019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.290 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60283 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60283 ']' 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60283 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60283 00:07:24.549 killing process with pid 60283 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60283' 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60283 00:07:24.549 [2024-12-13 08:18:36.710511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.549 [2024-12-13 08:18:36.710606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.549 [2024-12-13 08:18:36.710658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.549 [2024-12-13 08:18:36.710668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:24.549 08:18:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60283 00:07:25.928 [2024-12-13 08:18:38.161414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.346 ************************************ 00:07:27.346 END TEST raid0_resize_superblock_test 00:07:27.346 ************************************ 00:07:27.346 08:18:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:27.346 00:07:27.346 real 0m4.713s 00:07:27.346 user 0m4.969s 00:07:27.346 sys 0m0.564s 00:07:27.346 08:18:39 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.346 08:18:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.346 08:18:39 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:27.346 08:18:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.346 08:18:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.346 08:18:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.346 ************************************ 00:07:27.346 START TEST raid1_resize_superblock_test 00:07:27.346 ************************************ 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60387 00:07:27.346 Process raid pid: 60387 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60387' 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60387 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60387 ']' 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.346 08:18:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.346 [2024-12-13 08:18:39.446883] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:27.346 [2024-12-13 08:18:39.447514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.346 [2024-12-13 08:18:39.619134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.605 [2024-12-13 08:18:39.742603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.605 [2024-12-13 08:18:39.959377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.605 [2024-12-13 08:18:39.959428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.173 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.173 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:28.173 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:28.173 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.173 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.741 malloc0 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:40 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 [2024-12-13 08:18:40.871665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:28.742 [2024-12-13 08:18:40.871741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.742 [2024-12-13 08:18:40.871765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:28.742 [2024-12-13 08:18:40.871779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.742 [2024-12-13 08:18:40.873998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.742 [2024-12-13 08:18:40.874046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:28.742 pt0 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 5072ad23-cf61-4c9d-962c-4a83ce2a7abe 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:40 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 72454ab4-b898-4c37-818a-15444e2fbaf2 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 8466979d-eba7-445a-bd54-c878ea0ec570 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 [2024-12-13 08:18:40.998674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72454ab4-b898-4c37-818a-15444e2fbaf2 is claimed 00:07:28.742 [2024-12-13 08:18:40.998811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8466979d-eba7-445a-bd54-c878ea0ec570 is claimed 00:07:28.742 [2024-12-13 08:18:40.998986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:28.742 [2024-12-13 08:18:40.999003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:28.742 [2024-12-13 08:18:40.999383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:28.742 [2024-12-13 08:18:40.999614] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:28.742 [2024-12-13 08:18:40.999634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:28.742 [2024-12-13 08:18:40.999843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.742 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 [2024-12-13 08:18:41.106736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 [2024-12-13 08:18:41.134694] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:29.001 [2024-12-13 08:18:41.134740] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '72454ab4-b898-4c37-818a-15444e2fbaf2' was resized: old size 131072, new size 204800 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:29.001 08:18:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.001 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 [2024-12-13 08:18:41.142621] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:29.001 [2024-12-13 08:18:41.142658] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8466979d-eba7-445a-bd54-c878ea0ec570' was resized: old size 131072, new size 204800 00:07:29.001 [2024-12-13 08:18:41.142692] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:29.002 [2024-12-13 08:18:41.234511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 [2024-12-13 08:18:41.282253] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:29.002 [2024-12-13 08:18:41.282349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:29.002 [2024-12-13 08:18:41.282375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:29.002 [2024-12-13 08:18:41.282554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.002 [2024-12-13 08:18:41.282778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.002 [2024-12-13 08:18:41.282860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.002 [2024-12-13 08:18:41.282876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 [2024-12-13 08:18:41.294162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:29.002 [2024-12-13 08:18:41.294342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.002 [2024-12-13 08:18:41.294410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:29.002 [2024-12-13 08:18:41.294447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.002 [2024-12-13 08:18:41.296933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.002 [2024-12-13 08:18:41.297067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:29.002 pt0 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 
[2024-12-13 08:18:41.299237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 72454ab4-b898-4c37-818a-15444e2fbaf2 00:07:29.002 [2024-12-13 08:18:41.299331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72454ab4-b898-4c37-818a-15444e2fbaf2 is claimed 00:07:29.002 [2024-12-13 08:18:41.299455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8466979d-eba7-445a-bd54-c878ea0ec570 00:07:29.002 [2024-12-13 08:18:41.299558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8466979d-eba7-445a-bd54-c878ea0ec570 is claimed 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:29.002 [2024-12-13 08:18:41.299755] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8466979d-eba7-445a-bd54-c878ea0ec570 (2) smaller than existing raid bdev Raid (3) 00:07:29.002 [2024-12-13 08:18:41.299795] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 72454ab4-b898-4c37-818a-15444e2fbaf2: File exists 00:07:29.002 [2024-12-13 08:18:41.299843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:29.002 [2024-12-13 08:18:41.299855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:29.002 [2024-12-13 08:18:41.300175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 [2024-12-13 08:18:41.300424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:29.002 [2024-12-13 08:18:41.300476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 
[2024-12-13 08:18:41.300706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 [2024-12-13 08:18:41.322336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60387 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60387 ']' 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60387 00:07:29.002 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60387 00:07:29.260 killing process with pid 60387 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60387' 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60387 00:07:29.260 08:18:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60387 00:07:29.260 [2024-12-13 08:18:41.399521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.260 [2024-12-13 08:18:41.399621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.260 [2024-12-13 08:18:41.399685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.260 [2024-12-13 08:18:41.399743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:30.659 [2024-12-13 08:18:42.871598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.043 08:18:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:32.043 00:07:32.043 real 0m4.653s 00:07:32.043 user 0m4.838s 00:07:32.043 sys 0m0.558s 00:07:32.043 ************************************ 00:07:32.043 END TEST raid1_resize_superblock_test 00:07:32.043 ************************************ 00:07:32.043 08:18:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.043 08:18:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.043 
08:18:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:32.043 08:18:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:32.043 08:18:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:32.043 08:18:44 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:32.043 08:18:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:32.043 08:18:44 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:32.043 08:18:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.043 08:18:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.043 08:18:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.043 ************************************ 00:07:32.043 START TEST raid_function_test_raid0 00:07:32.043 ************************************ 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60484 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60484' 00:07:32.043 Process raid pid: 60484 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60484 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60484 ']' 00:07:32.043 08:18:44 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.043 08:18:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.043 [2024-12-13 08:18:44.183187] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:32.043 [2024-12-13 08:18:44.183418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.043 [2024-12-13 08:18:44.341239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.302 [2024-12-13 08:18:44.464659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.561 [2024-12-13 08:18:44.681842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.561 [2024-12-13 08:18:44.681977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.820 Base_1 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.820 Base_2 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.820 [2024-12-13 08:18:45.120992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:32.820 [2024-12-13 08:18:45.122797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:32.820 [2024-12-13 08:18:45.122869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.820 [2024-12-13 08:18:45.122881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:32.820 [2024-12-13 08:18:45.123276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.820 [2024-12-13 08:18:45.123490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.820 [2024-12-13 08:18:45.123561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:07:32.820 [2024-12-13 08:18:45.123780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:32.820 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.821 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:32.821 
08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:33.079 [2024-12-13 08:18:45.376614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:33.079 /dev/nbd0 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.079 1+0 records in 00:07:33.079 1+0 records out 00:07:33.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372023 s, 11.0 MB/s 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.079 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:33.079 
08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.338 { 00:07:33.338 "nbd_device": "/dev/nbd0", 00:07:33.338 "bdev_name": "raid" 00:07:33.338 } 00:07:33.338 ]' 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.338 { 00:07:33.338 "nbd_device": "/dev/nbd0", 00:07:33.338 "bdev_name": "raid" 00:07:33.338 } 00:07:33.338 ]' 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.338 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:33.597 08:18:45 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:33.597 4096+0 records in 00:07:33.597 4096+0 records out 00:07:33.597 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0311387 s, 67.3 MB/s 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:33.597 4096+0 records in 00:07:33.597 4096+0 records out 00:07:33.597 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.197625 s, 10.6 MB/s 00:07:33.597 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:33.856 128+0 records in 00:07:33.856 128+0 records out 00:07:33.856 65536 bytes (66 kB, 64 KiB) copied, 0.00122065 s, 53.7 MB/s 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.856 
08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:33.856 08:18:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:33.856 2035+0 records in 00:07:33.856 2035+0 records out 00:07:33.856 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0145822 s, 71.5 MB/s 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:33.856 456+0 records in 00:07:33.856 456+0 records out 00:07:33.856 233472 bytes (233 kB, 228 KiB) copied, 0.00405611 s, 57.6 MB/s 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.856 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.115 [2024-12-13 08:18:46.288120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.115 08:18:46 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:34.115 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60484 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60484 ']' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60484 
00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60484 00:07:34.374 killing process with pid 60484 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60484' 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60484 00:07:34.374 08:18:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60484 00:07:34.374 [2024-12-13 08:18:46.619870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.374 [2024-12-13 08:18:46.619968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.374 [2024-12-13 08:18:46.620056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.374 [2024-12-13 08:18:46.620075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:34.633 [2024-12-13 08:18:46.834805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.013 08:18:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:36.013 00:07:36.013 real 0m3.853s 00:07:36.013 user 0m4.505s 00:07:36.013 sys 0m0.922s 00:07:36.013 08:18:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.013 08:18:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:07:36.013 ************************************ 00:07:36.013 END TEST raid_function_test_raid0 00:07:36.013 ************************************ 00:07:36.013 08:18:47 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:36.013 08:18:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.013 08:18:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.013 08:18:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.013 ************************************ 00:07:36.013 START TEST raid_function_test_concat 00:07:36.013 ************************************ 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60613 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60613' 00:07:36.013 Process raid pid: 60613 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60613 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60613 ']' 00:07:36.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.013 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.013 [2024-12-13 08:18:48.098435] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:36.013 [2024-12-13 08:18:48.098648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.013 [2024-12-13 08:18:48.273897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.272 [2024-12-13 08:18:48.389517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.272 [2024-12-13 08:18:48.606095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.272 [2024-12-13 08:18:48.606199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.841 08:18:48 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.841 Base_1 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.841 08:18:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.841 Base_2 00:07:36.841 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.841 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:36.841 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.841 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.841 [2024-12-13 08:18:49.041010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:36.841 [2024-12-13 08:18:49.042818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:36.841 [2024-12-13 08:18:49.042891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:36.842 [2024-12-13 08:18:49.042903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:36.842 [2024-12-13 08:18:49.043208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:36.842 [2024-12-13 08:18:49.043372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:36.842 [2024-12-13 08:18:49.043386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:36.842 [2024-12-13 
08:18:49.043548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.842 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:36.842 08:18:49 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:37.101 [2024-12-13 08:18:49.272689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:37.101 /dev/nbd0 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.101 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.102 1+0 records in 00:07:37.102 1+0 records out 00:07:37.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495809 s, 8.3 MB/s 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 
00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.102 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.362 { 00:07:37.362 "nbd_device": "/dev/nbd0", 00:07:37.362 "bdev_name": "raid" 00:07:37.362 } 00:07:37.362 ]' 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.362 { 00:07:37.362 "nbd_device": "/dev/nbd0", 00:07:37.362 "bdev_name": "raid" 00:07:37.362 } 00:07:37.362 ]' 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:37.362 08:18:49 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:37.362 4096+0 records in 00:07:37.362 4096+0 records out 00:07:37.362 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0280715 s, 74.7 MB/s 00:07:37.362 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:37.621 4096+0 records in 00:07:37.621 4096+0 records out 00:07:37.621 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.210566 s, 10.0 MB/s 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:37.621 128+0 records in 00:07:37.621 128+0 records out 00:07:37.621 65536 bytes (66 kB, 64 KiB) copied, 0.00111933 s, 58.5 MB/s 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:37.621 2035+0 records in 00:07:37.621 2035+0 records out 00:07:37.621 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00663032 s, 157 MB/s 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:37.621 456+0 records in 00:07:37.621 456+0 records out 00:07:37.621 233472 bytes (233 kB, 228 KiB) copied, 0.00160905 s, 145 MB/s 00:07:37.621 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:37.885 08:18:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.885 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:38.163 [2024-12-13 08:18:50.269410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.163 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60613 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60613 ']' 
00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60613 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60613 00:07:38.427 killing process with pid 60613 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60613' 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60613 00:07:38.427 [2024-12-13 08:18:50.607788] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.427 [2024-12-13 08:18:50.607903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.427 08:18:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60613 00:07:38.427 [2024-12-13 08:18:50.607958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.427 [2024-12-13 08:18:50.607970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:38.686 [2024-12-13 08:18:50.821825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.066 ************************************ 00:07:40.066 END TEST raid_function_test_concat 00:07:40.066 ************************************ 00:07:40.066 08:18:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:40.066 00:07:40.066 real 0m3.983s 
00:07:40.066 user 0m4.659s 00:07:40.066 sys 0m0.956s 00:07:40.066 08:18:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.066 08:18:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 08:18:52 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:40.066 08:18:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.066 08:18:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.066 08:18:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 ************************************ 00:07:40.066 START TEST raid0_resize_test 00:07:40.066 ************************************ 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60740 00:07:40.066 Process raid pid: 60740 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.066 08:18:52 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60740' 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60740 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60740 ']' 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.066 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 [2024-12-13 08:18:52.152616] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:40.066 [2024-12-13 08:18:52.153238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.066 [2024-12-13 08:18:52.308944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.066 [2024-12-13 08:18:52.428501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.325 [2024-12-13 08:18:52.637187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.325 [2024-12-13 08:18:52.637237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.893 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.893 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:40.893 08:18:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:40.893 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 Base_1 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 Base_2 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 [2024-12-13 08:18:53.027637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:40.893 [2024-12-13 08:18:53.029489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:40.893 [2024-12-13 08:18:53.029560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:40.893 [2024-12-13 08:18:53.029573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:40.893 [2024-12-13 08:18:53.029890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:40.893 [2024-12-13 08:18:53.030052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:40.893 [2024-12-13 08:18:53.030066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:40.893 [2024-12-13 08:18:53.030281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 [2024-12-13 08:18:53.035626] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:40.893 [2024-12-13 08:18:53.035662] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:40.893 true 
00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 [2024-12-13 08:18:53.047814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 [2024-12-13 08:18:53.099530] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:40.893 [2024-12-13 08:18:53.099572] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:40.893 [2024-12-13 08:18:53.099610] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:40.893 true 
00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:40.893 [2024-12-13 08:18:53.111699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.893 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60740 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60740 ']' 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60740 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60740 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.894 08:18:53 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.894 killing process with pid 60740 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60740' 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60740 00:07:40.894 [2024-12-13 08:18:53.198597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.894 [2024-12-13 08:18:53.198704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.894 [2024-12-13 08:18:53.198756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.894 [2024-12-13 08:18:53.198765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:40.894 08:18:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60740 00:07:40.894 [2024-12-13 08:18:53.216626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.272 08:18:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:42.272 00:07:42.272 real 0m2.293s 00:07:42.272 user 0m2.454s 00:07:42.272 sys 0m0.327s 00:07:42.272 08:18:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.272 08:18:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 ************************************ 00:07:42.272 END TEST raid0_resize_test 00:07:42.272 ************************************ 00:07:42.272 08:18:54 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:42.272 08:18:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.272 08:18:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.272 08:18:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 
************************************ 00:07:42.272 START TEST raid1_resize_test 00:07:42.272 ************************************ 00:07:42.272 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60796 00:07:42.273 Process raid pid: 60796 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60796' 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60796 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60796 ']' 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.273 08:18:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.273 [2024-12-13 08:18:54.513676] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:42.273 [2024-12-13 08:18:54.513814] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.532 [2024-12-13 08:18:54.687601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.532 [2024-12-13 08:18:54.811896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.791 [2024-12-13 08:18:55.019003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.792 [2024-12-13 08:18:55.019056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.050 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.050 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 Base_1 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:43.051 
08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 Base_2 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 [2024-12-13 08:18:55.390015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:43.051 [2024-12-13 08:18:55.392010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:43.051 [2024-12-13 08:18:55.392077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.051 [2024-12-13 08:18:55.392089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:43.051 [2024-12-13 08:18:55.392356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.051 [2024-12-13 08:18:55.392476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.051 [2024-12-13 08:18:55.392485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:43.051 [2024-12-13 08:18:55.392638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:43.051 08:18:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 [2024-12-13 08:18:55.401981] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.051 [2024-12-13 08:18:55.402015] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:43.051 true 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.051 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.310 [2024-12-13 08:18:55.418122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:43.310 [2024-12-13 08:18:55.457904] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.310 [2024-12-13 08:18:55.458014] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:43.310 [2024-12-13 08:18:55.458054] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:43.310 true 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.310 [2024-12-13 08:18:55.474018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60796 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60796 ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60796 00:07:43.310 
08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60796 00:07:43.310 killing process with pid 60796 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60796' 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60796 00:07:43.310 [2024-12-13 08:18:55.557141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.310 08:18:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60796 00:07:43.310 [2024-12-13 08:18:55.557252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.310 [2024-12-13 08:18:55.557756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.310 [2024-12-13 08:18:55.557780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:43.310 [2024-12-13 08:18:55.575470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.710 ************************************ 00:07:44.710 END TEST raid1_resize_test 00:07:44.710 ************************************ 00:07:44.710 08:18:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:44.710 00:07:44.710 real 0m2.287s 00:07:44.710 user 0m2.467s 00:07:44.710 sys 0m0.310s 00:07:44.710 08:18:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.710 08:18:56 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.710 08:18:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:44.710 08:18:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:44.710 08:18:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:44.710 08:18:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:44.710 08:18:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.710 08:18:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.710 ************************************ 00:07:44.710 START TEST raid_state_function_test 00:07:44.710 ************************************ 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:44.710 08:18:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:44.710 Process raid pid: 60854 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60854 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60854' 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60854 00:07:44.710 08:18:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60854 ']' 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.710 08:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.710 [2024-12-13 08:18:56.880786] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:44.710 [2024-12-13 08:18:56.880919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.710 [2024-12-13 08:18:57.037081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.970 [2024-12-13 08:18:57.153063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.229 [2024-12-13 08:18:57.371620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.229 [2024-12-13 08:18:57.371737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.489 [2024-12-13 08:18:57.721839] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.489 [2024-12-13 08:18:57.721899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.489 [2024-12-13 08:18:57.721909] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.489 [2024-12-13 08:18:57.721918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.489 
08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.489 "name": "Existed_Raid", 00:07:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.489 "strip_size_kb": 64, 00:07:45.489 "state": "configuring", 00:07:45.489 "raid_level": "raid0", 00:07:45.489 "superblock": false, 00:07:45.489 "num_base_bdevs": 2, 00:07:45.489 "num_base_bdevs_discovered": 0, 00:07:45.489 "num_base_bdevs_operational": 2, 00:07:45.489 "base_bdevs_list": [ 00:07:45.489 { 00:07:45.489 "name": "BaseBdev1", 00:07:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.489 "is_configured": false, 00:07:45.489 "data_offset": 0, 00:07:45.489 "data_size": 0 00:07:45.489 }, 00:07:45.489 { 00:07:45.489 "name": "BaseBdev2", 00:07:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.489 "is_configured": false, 00:07:45.489 "data_offset": 0, 00:07:45.489 "data_size": 0 00:07:45.489 } 00:07:45.489 ] 00:07:45.489 }' 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.489 08:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.057 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.058 08:18:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 [2024-12-13 08:18:58.184995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.058 [2024-12-13 08:18:58.185095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 [2024-12-13 08:18:58.192969] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.058 [2024-12-13 08:18:58.193053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.058 [2024-12-13 08:18:58.193081] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.058 [2024-12-13 08:18:58.193116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 [2024-12-13 08:18:58.236674] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.058 BaseBdev1 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 [ 00:07:46.058 { 00:07:46.058 "name": "BaseBdev1", 00:07:46.058 "aliases": [ 00:07:46.058 "05a5302f-5a62-4906-8715-e4a9d2d581c5" 00:07:46.058 ], 00:07:46.058 "product_name": "Malloc disk", 00:07:46.058 "block_size": 512, 00:07:46.058 "num_blocks": 65536, 00:07:46.058 "uuid": 
"05a5302f-5a62-4906-8715-e4a9d2d581c5", 00:07:46.058 "assigned_rate_limits": { 00:07:46.058 "rw_ios_per_sec": 0, 00:07:46.058 "rw_mbytes_per_sec": 0, 00:07:46.058 "r_mbytes_per_sec": 0, 00:07:46.058 "w_mbytes_per_sec": 0 00:07:46.058 }, 00:07:46.058 "claimed": true, 00:07:46.058 "claim_type": "exclusive_write", 00:07:46.058 "zoned": false, 00:07:46.058 "supported_io_types": { 00:07:46.058 "read": true, 00:07:46.058 "write": true, 00:07:46.058 "unmap": true, 00:07:46.058 "flush": true, 00:07:46.058 "reset": true, 00:07:46.058 "nvme_admin": false, 00:07:46.058 "nvme_io": false, 00:07:46.058 "nvme_io_md": false, 00:07:46.058 "write_zeroes": true, 00:07:46.058 "zcopy": true, 00:07:46.058 "get_zone_info": false, 00:07:46.058 "zone_management": false, 00:07:46.058 "zone_append": false, 00:07:46.058 "compare": false, 00:07:46.058 "compare_and_write": false, 00:07:46.058 "abort": true, 00:07:46.058 "seek_hole": false, 00:07:46.058 "seek_data": false, 00:07:46.058 "copy": true, 00:07:46.058 "nvme_iov_md": false 00:07:46.058 }, 00:07:46.058 "memory_domains": [ 00:07:46.058 { 00:07:46.058 "dma_device_id": "system", 00:07:46.058 "dma_device_type": 1 00:07:46.058 }, 00:07:46.058 { 00:07:46.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.058 "dma_device_type": 2 00:07:46.058 } 00:07:46.058 ], 00:07:46.058 "driver_specific": {} 00:07:46.058 } 00:07:46.058 ] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.058 08:18:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.058 "name": "Existed_Raid", 00:07:46.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.058 "strip_size_kb": 64, 00:07:46.058 "state": "configuring", 00:07:46.058 "raid_level": "raid0", 00:07:46.058 "superblock": false, 00:07:46.058 "num_base_bdevs": 2, 00:07:46.058 "num_base_bdevs_discovered": 1, 00:07:46.058 "num_base_bdevs_operational": 2, 00:07:46.058 "base_bdevs_list": [ 00:07:46.058 { 00:07:46.058 "name": "BaseBdev1", 00:07:46.058 "uuid": "05a5302f-5a62-4906-8715-e4a9d2d581c5", 00:07:46.058 "is_configured": true, 00:07:46.058 "data_offset": 0, 
00:07:46.058 "data_size": 65536 00:07:46.058 }, 00:07:46.058 { 00:07:46.058 "name": "BaseBdev2", 00:07:46.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.058 "is_configured": false, 00:07:46.058 "data_offset": 0, 00:07:46.058 "data_size": 0 00:07:46.058 } 00:07:46.058 ] 00:07:46.058 }' 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.058 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.318 [2024-12-13 08:18:58.675991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.318 [2024-12-13 08:18:58.676149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.318 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.577 [2024-12-13 08:18:58.684048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.577 [2024-12-13 08:18:58.686180] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.577 [2024-12-13 08:18:58.686271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.577 08:18:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.578 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.578 "name": "Existed_Raid", 00:07:46.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.578 "strip_size_kb": 64, 00:07:46.578 "state": "configuring", 00:07:46.578 "raid_level": "raid0", 00:07:46.578 "superblock": false, 00:07:46.578 "num_base_bdevs": 2, 00:07:46.578 "num_base_bdevs_discovered": 1, 00:07:46.578 "num_base_bdevs_operational": 2, 00:07:46.578 "base_bdevs_list": [ 00:07:46.578 { 00:07:46.578 "name": "BaseBdev1", 00:07:46.578 "uuid": "05a5302f-5a62-4906-8715-e4a9d2d581c5", 00:07:46.578 "is_configured": true, 00:07:46.578 "data_offset": 0, 00:07:46.578 "data_size": 65536 00:07:46.578 }, 00:07:46.578 { 00:07:46.578 "name": "BaseBdev2", 00:07:46.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.578 "is_configured": false, 00:07:46.578 "data_offset": 0, 00:07:46.578 "data_size": 0 00:07:46.578 } 00:07:46.578 ] 00:07:46.578 }' 00:07:46.578 08:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.578 08:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 [2024-12-13 08:18:59.157162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.837 [2024-12-13 08:18:59.157212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.837 [2024-12-13 08:18:59.157221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:46.837 [2024-12-13 08:18:59.157479] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.837 [2024-12-13 08:18:59.157672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.837 [2024-12-13 08:18:59.157692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:46.837 [2024-12-13 08:18:59.157991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.837 BaseBdev2 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:46.837 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.837 08:18:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 [ 00:07:46.837 { 00:07:46.837 "name": "BaseBdev2", 00:07:46.837 "aliases": [ 00:07:46.837 "a4062605-f98e-4b3f-ad5f-61ba030b37ab" 00:07:46.837 ], 00:07:46.837 "product_name": "Malloc disk", 00:07:46.837 "block_size": 512, 00:07:46.837 "num_blocks": 65536, 00:07:46.837 "uuid": "a4062605-f98e-4b3f-ad5f-61ba030b37ab", 00:07:46.837 "assigned_rate_limits": { 00:07:46.837 "rw_ios_per_sec": 0, 00:07:46.837 "rw_mbytes_per_sec": 0, 00:07:46.837 "r_mbytes_per_sec": 0, 00:07:46.837 "w_mbytes_per_sec": 0 00:07:46.837 }, 00:07:46.837 "claimed": true, 00:07:46.837 "claim_type": "exclusive_write", 00:07:46.837 "zoned": false, 00:07:46.837 "supported_io_types": { 00:07:46.837 "read": true, 00:07:46.837 "write": true, 00:07:46.838 "unmap": true, 00:07:46.838 "flush": true, 00:07:46.838 "reset": true, 00:07:46.838 "nvme_admin": false, 00:07:46.838 "nvme_io": false, 00:07:46.838 "nvme_io_md": false, 00:07:46.838 "write_zeroes": true, 00:07:46.838 "zcopy": true, 00:07:46.838 "get_zone_info": false, 00:07:46.838 "zone_management": false, 00:07:46.838 "zone_append": false, 00:07:46.838 "compare": false, 00:07:46.838 "compare_and_write": false, 00:07:46.838 "abort": true, 00:07:46.838 "seek_hole": false, 00:07:46.838 "seek_data": false, 00:07:46.838 "copy": true, 00:07:46.838 "nvme_iov_md": false 00:07:46.838 }, 00:07:46.838 "memory_domains": [ 00:07:46.838 { 00:07:46.838 "dma_device_id": "system", 00:07:46.838 "dma_device_type": 1 00:07:46.838 }, 00:07:46.838 { 00:07:46.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.838 "dma_device_type": 2 00:07:46.838 } 00:07:46.838 ], 00:07:46.838 "driver_specific": {} 00:07:46.838 } 00:07:46.838 ] 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.838 08:18:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.838 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.097 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:47.097 "name": "Existed_Raid", 00:07:47.097 "uuid": "c2646ba0-b58f-4e1b-bcc8-8ff1c54365fa", 00:07:47.097 "strip_size_kb": 64, 00:07:47.097 "state": "online", 00:07:47.097 "raid_level": "raid0", 00:07:47.097 "superblock": false, 00:07:47.097 "num_base_bdevs": 2, 00:07:47.097 "num_base_bdevs_discovered": 2, 00:07:47.097 "num_base_bdevs_operational": 2, 00:07:47.097 "base_bdevs_list": [ 00:07:47.097 { 00:07:47.097 "name": "BaseBdev1", 00:07:47.097 "uuid": "05a5302f-5a62-4906-8715-e4a9d2d581c5", 00:07:47.097 "is_configured": true, 00:07:47.097 "data_offset": 0, 00:07:47.098 "data_size": 65536 00:07:47.098 }, 00:07:47.098 { 00:07:47.098 "name": "BaseBdev2", 00:07:47.098 "uuid": "a4062605-f98e-4b3f-ad5f-61ba030b37ab", 00:07:47.098 "is_configured": true, 00:07:47.098 "data_offset": 0, 00:07:47.098 "data_size": 65536 00:07:47.098 } 00:07:47.098 ] 00:07:47.098 }' 00:07:47.098 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.098 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.358 [2024-12-13 08:18:59.648650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.358 "name": "Existed_Raid", 00:07:47.358 "aliases": [ 00:07:47.358 "c2646ba0-b58f-4e1b-bcc8-8ff1c54365fa" 00:07:47.358 ], 00:07:47.358 "product_name": "Raid Volume", 00:07:47.358 "block_size": 512, 00:07:47.358 "num_blocks": 131072, 00:07:47.358 "uuid": "c2646ba0-b58f-4e1b-bcc8-8ff1c54365fa", 00:07:47.358 "assigned_rate_limits": { 00:07:47.358 "rw_ios_per_sec": 0, 00:07:47.358 "rw_mbytes_per_sec": 0, 00:07:47.358 "r_mbytes_per_sec": 0, 00:07:47.358 "w_mbytes_per_sec": 0 00:07:47.358 }, 00:07:47.358 "claimed": false, 00:07:47.358 "zoned": false, 00:07:47.358 "supported_io_types": { 00:07:47.358 "read": true, 00:07:47.358 "write": true, 00:07:47.358 "unmap": true, 00:07:47.358 "flush": true, 00:07:47.358 "reset": true, 00:07:47.358 "nvme_admin": false, 00:07:47.358 "nvme_io": false, 00:07:47.358 "nvme_io_md": false, 00:07:47.358 "write_zeroes": true, 00:07:47.358 "zcopy": false, 00:07:47.358 "get_zone_info": false, 00:07:47.358 "zone_management": false, 00:07:47.358 "zone_append": false, 00:07:47.358 "compare": false, 00:07:47.358 "compare_and_write": false, 00:07:47.358 "abort": false, 00:07:47.358 "seek_hole": false, 00:07:47.358 "seek_data": false, 00:07:47.358 "copy": false, 00:07:47.358 "nvme_iov_md": false 00:07:47.358 }, 00:07:47.358 "memory_domains": [ 00:07:47.358 { 00:07:47.358 "dma_device_id": "system", 00:07:47.358 "dma_device_type": 1 00:07:47.358 }, 00:07:47.358 { 00:07:47.358 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:47.358 "dma_device_type": 2 00:07:47.358 }, 00:07:47.358 { 00:07:47.358 "dma_device_id": "system", 00:07:47.358 "dma_device_type": 1 00:07:47.358 }, 00:07:47.358 { 00:07:47.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.358 "dma_device_type": 2 00:07:47.358 } 00:07:47.358 ], 00:07:47.358 "driver_specific": { 00:07:47.358 "raid": { 00:07:47.358 "uuid": "c2646ba0-b58f-4e1b-bcc8-8ff1c54365fa", 00:07:47.358 "strip_size_kb": 64, 00:07:47.358 "state": "online", 00:07:47.358 "raid_level": "raid0", 00:07:47.358 "superblock": false, 00:07:47.358 "num_base_bdevs": 2, 00:07:47.358 "num_base_bdevs_discovered": 2, 00:07:47.358 "num_base_bdevs_operational": 2, 00:07:47.358 "base_bdevs_list": [ 00:07:47.358 { 00:07:47.358 "name": "BaseBdev1", 00:07:47.358 "uuid": "05a5302f-5a62-4906-8715-e4a9d2d581c5", 00:07:47.358 "is_configured": true, 00:07:47.358 "data_offset": 0, 00:07:47.358 "data_size": 65536 00:07:47.358 }, 00:07:47.358 { 00:07:47.358 "name": "BaseBdev2", 00:07:47.358 "uuid": "a4062605-f98e-4b3f-ad5f-61ba030b37ab", 00:07:47.358 "is_configured": true, 00:07:47.358 "data_offset": 0, 00:07:47.358 "data_size": 65536 00:07:47.358 } 00:07:47.358 ] 00:07:47.358 } 00:07:47.358 } 00:07:47.358 }' 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.358 BaseBdev2' 00:07:47.358 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:47.618 [2024-12-13 08:18:59.860040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.618 [2024-12-13 08:18:59.860075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.618 [2024-12-13 08:18:59.860162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.618 08:18:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.618 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.878 08:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.878 08:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.878 "name": "Existed_Raid", 00:07:47.878 "uuid": "c2646ba0-b58f-4e1b-bcc8-8ff1c54365fa", 00:07:47.878 "strip_size_kb": 64, 00:07:47.878 "state": "offline", 00:07:47.878 "raid_level": "raid0", 00:07:47.878 "superblock": false, 00:07:47.878 "num_base_bdevs": 2, 00:07:47.878 "num_base_bdevs_discovered": 1, 00:07:47.878 "num_base_bdevs_operational": 1, 00:07:47.878 "base_bdevs_list": [ 00:07:47.878 { 00:07:47.878 "name": null, 00:07:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.878 "is_configured": false, 00:07:47.878 "data_offset": 0, 00:07:47.878 "data_size": 65536 00:07:47.878 }, 00:07:47.878 { 00:07:47.878 "name": "BaseBdev2", 00:07:47.878 "uuid": "a4062605-f98e-4b3f-ad5f-61ba030b37ab", 00:07:47.878 "is_configured": true, 00:07:47.878 "data_offset": 0, 00:07:47.878 "data_size": 65536 00:07:47.878 } 00:07:47.878 ] 00:07:47.878 }' 00:07:47.878 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.878 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 [2024-12-13 08:19:00.394584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.137 [2024-12-13 08:19:00.394639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.137 08:19:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60854 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60854 ']' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60854 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60854 00:07:48.397 killing process with pid 60854 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60854' 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60854 00:07:48.397 [2024-12-13 08:19:00.586372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:48.397 08:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60854 00:07:48.397 [2024-12-13 08:19:00.605680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.777 00:07:49.777 real 0m4.957s 00:07:49.777 user 0m7.091s 00:07:49.777 sys 0m0.808s 00:07:49.777 ************************************ 00:07:49.777 END TEST raid_state_function_test 00:07:49.777 ************************************ 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.777 08:19:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:49.777 08:19:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.777 08:19:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.777 08:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.777 ************************************ 00:07:49.777 START TEST raid_state_function_test_sb 00:07:49.777 ************************************ 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61106 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61106' 00:07:49.777 Process raid pid: 61106 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61106 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61106 ']' 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.777 08:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.777 [2024-12-13 08:19:01.899460] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:49.777 [2024-12-13 08:19:01.899660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.777 [2024-12-13 08:19:02.071147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.036 [2024-12-13 08:19:02.193072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.036 [2024-12-13 08:19:02.395267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.036 [2024-12-13 08:19:02.395392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.607 [2024-12-13 08:19:02.744002] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.607 [2024-12-13 08:19:02.744058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.607 [2024-12-13 08:19:02.744070] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.607 [2024-12-13 08:19:02.744080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.607 
08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.607 "name": "Existed_Raid", 00:07:50.607 "uuid": "55bef98c-3a50-4a2d-a5e0-154e445bb418", 00:07:50.607 "strip_size_kb": 
64, 00:07:50.607 "state": "configuring", 00:07:50.607 "raid_level": "raid0", 00:07:50.607 "superblock": true, 00:07:50.607 "num_base_bdevs": 2, 00:07:50.607 "num_base_bdevs_discovered": 0, 00:07:50.607 "num_base_bdevs_operational": 2, 00:07:50.607 "base_bdevs_list": [ 00:07:50.607 { 00:07:50.607 "name": "BaseBdev1", 00:07:50.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.607 "is_configured": false, 00:07:50.607 "data_offset": 0, 00:07:50.607 "data_size": 0 00:07:50.607 }, 00:07:50.607 { 00:07:50.607 "name": "BaseBdev2", 00:07:50.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.607 "is_configured": false, 00:07:50.607 "data_offset": 0, 00:07:50.607 "data_size": 0 00:07:50.607 } 00:07:50.607 ] 00:07:50.607 }' 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.607 08:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.866 [2024-12-13 08:19:03.195208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.866 [2024-12-13 08:19:03.195304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.866 08:19:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.866 [2024-12-13 08:19:03.207192] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.866 [2024-12-13 08:19:03.207286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.866 [2024-12-13 08:19:03.207314] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.866 [2024-12-13 08:19:03.207341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.866 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.126 [2024-12-13 08:19:03.254715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.126 BaseBdev1 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.126 [ 00:07:51.126 { 00:07:51.126 "name": "BaseBdev1", 00:07:51.126 "aliases": [ 00:07:51.126 "15ed9b9f-35b9-436e-99f7-59adc1dc5751" 00:07:51.126 ], 00:07:51.126 "product_name": "Malloc disk", 00:07:51.126 "block_size": 512, 00:07:51.126 "num_blocks": 65536, 00:07:51.126 "uuid": "15ed9b9f-35b9-436e-99f7-59adc1dc5751", 00:07:51.126 "assigned_rate_limits": { 00:07:51.126 "rw_ios_per_sec": 0, 00:07:51.126 "rw_mbytes_per_sec": 0, 00:07:51.126 "r_mbytes_per_sec": 0, 00:07:51.126 "w_mbytes_per_sec": 0 00:07:51.126 }, 00:07:51.126 "claimed": true, 00:07:51.126 "claim_type": "exclusive_write", 00:07:51.126 "zoned": false, 00:07:51.126 "supported_io_types": { 00:07:51.126 "read": true, 00:07:51.126 "write": true, 00:07:51.126 "unmap": true, 00:07:51.126 "flush": true, 00:07:51.126 "reset": true, 00:07:51.126 "nvme_admin": false, 00:07:51.126 "nvme_io": false, 00:07:51.126 "nvme_io_md": false, 00:07:51.126 "write_zeroes": true, 00:07:51.126 "zcopy": true, 00:07:51.126 "get_zone_info": false, 00:07:51.126 "zone_management": false, 00:07:51.126 "zone_append": false, 00:07:51.126 "compare": false, 00:07:51.126 "compare_and_write": false, 00:07:51.126 
"abort": true, 00:07:51.126 "seek_hole": false, 00:07:51.126 "seek_data": false, 00:07:51.126 "copy": true, 00:07:51.126 "nvme_iov_md": false 00:07:51.126 }, 00:07:51.126 "memory_domains": [ 00:07:51.126 { 00:07:51.126 "dma_device_id": "system", 00:07:51.126 "dma_device_type": 1 00:07:51.126 }, 00:07:51.126 { 00:07:51.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.126 "dma_device_type": 2 00:07:51.126 } 00:07:51.126 ], 00:07:51.126 "driver_specific": {} 00:07:51.126 } 00:07:51.126 ] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.126 "name": "Existed_Raid", 00:07:51.126 "uuid": "b5d659cd-f778-4fa0-8c2e-fef87fdd74bc", 00:07:51.126 "strip_size_kb": 64, 00:07:51.126 "state": "configuring", 00:07:51.126 "raid_level": "raid0", 00:07:51.126 "superblock": true, 00:07:51.126 "num_base_bdevs": 2, 00:07:51.126 "num_base_bdevs_discovered": 1, 00:07:51.126 "num_base_bdevs_operational": 2, 00:07:51.126 "base_bdevs_list": [ 00:07:51.126 { 00:07:51.126 "name": "BaseBdev1", 00:07:51.126 "uuid": "15ed9b9f-35b9-436e-99f7-59adc1dc5751", 00:07:51.126 "is_configured": true, 00:07:51.126 "data_offset": 2048, 00:07:51.126 "data_size": 63488 00:07:51.126 }, 00:07:51.126 { 00:07:51.126 "name": "BaseBdev2", 00:07:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.126 "is_configured": false, 00:07:51.126 "data_offset": 0, 00:07:51.126 "data_size": 0 00:07:51.126 } 00:07:51.126 ] 00:07:51.126 }' 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.126 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.386 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.386 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.386 08:19:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.645 [2024-12-13 08:19:03.753948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.645 [2024-12-13 08:19:03.754072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.645 [2024-12-13 08:19:03.765995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.645 [2024-12-13 08:19:03.768212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.645 [2024-12-13 08:19:03.768304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.645 "name": "Existed_Raid", 00:07:51.645 "uuid": "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6", 00:07:51.645 "strip_size_kb": 64, 00:07:51.645 "state": "configuring", 00:07:51.645 "raid_level": "raid0", 00:07:51.645 "superblock": true, 00:07:51.645 "num_base_bdevs": 2, 00:07:51.645 "num_base_bdevs_discovered": 1, 00:07:51.645 "num_base_bdevs_operational": 2, 00:07:51.645 "base_bdevs_list": [ 00:07:51.645 { 00:07:51.645 "name": "BaseBdev1", 00:07:51.645 "uuid": "15ed9b9f-35b9-436e-99f7-59adc1dc5751", 00:07:51.645 "is_configured": true, 00:07:51.645 "data_offset": 2048, 
00:07:51.645 "data_size": 63488 00:07:51.645 }, 00:07:51.645 { 00:07:51.645 "name": "BaseBdev2", 00:07:51.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.645 "is_configured": false, 00:07:51.645 "data_offset": 0, 00:07:51.645 "data_size": 0 00:07:51.645 } 00:07:51.645 ] 00:07:51.645 }' 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.645 08:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.905 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.905 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.905 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.164 [2024-12-13 08:19:04.272034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.164 [2024-12-13 08:19:04.272441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.164 [2024-12-13 08:19:04.272498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.164 [2024-12-13 08:19:04.272870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.164 BaseBdev2 00:07:52.164 [2024-12-13 08:19:04.273098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.164 [2024-12-13 08:19:04.273132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.164 [2024-12-13 08:19:04.273276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.164 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.165 [ 00:07:52.165 { 00:07:52.165 "name": "BaseBdev2", 00:07:52.165 "aliases": [ 00:07:52.165 "9d82e27a-3ec0-4986-89e0-c0d71d37c00f" 00:07:52.165 ], 00:07:52.165 "product_name": "Malloc disk", 00:07:52.165 "block_size": 512, 00:07:52.165 "num_blocks": 65536, 00:07:52.165 "uuid": "9d82e27a-3ec0-4986-89e0-c0d71d37c00f", 00:07:52.165 "assigned_rate_limits": { 00:07:52.165 "rw_ios_per_sec": 0, 00:07:52.165 "rw_mbytes_per_sec": 0, 00:07:52.165 "r_mbytes_per_sec": 0, 00:07:52.165 "w_mbytes_per_sec": 0 00:07:52.165 }, 00:07:52.165 "claimed": true, 00:07:52.165 "claim_type": 
"exclusive_write", 00:07:52.165 "zoned": false, 00:07:52.165 "supported_io_types": { 00:07:52.165 "read": true, 00:07:52.165 "write": true, 00:07:52.165 "unmap": true, 00:07:52.165 "flush": true, 00:07:52.165 "reset": true, 00:07:52.165 "nvme_admin": false, 00:07:52.165 "nvme_io": false, 00:07:52.165 "nvme_io_md": false, 00:07:52.165 "write_zeroes": true, 00:07:52.165 "zcopy": true, 00:07:52.165 "get_zone_info": false, 00:07:52.165 "zone_management": false, 00:07:52.165 "zone_append": false, 00:07:52.165 "compare": false, 00:07:52.165 "compare_and_write": false, 00:07:52.165 "abort": true, 00:07:52.165 "seek_hole": false, 00:07:52.165 "seek_data": false, 00:07:52.165 "copy": true, 00:07:52.165 "nvme_iov_md": false 00:07:52.165 }, 00:07:52.165 "memory_domains": [ 00:07:52.165 { 00:07:52.165 "dma_device_id": "system", 00:07:52.165 "dma_device_type": 1 00:07:52.165 }, 00:07:52.165 { 00:07:52.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.165 "dma_device_type": 2 00:07:52.165 } 00:07:52.165 ], 00:07:52.165 "driver_specific": {} 00:07:52.165 } 00:07:52.165 ] 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.165 "name": "Existed_Raid", 00:07:52.165 "uuid": "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6", 00:07:52.165 "strip_size_kb": 64, 00:07:52.165 "state": "online", 00:07:52.165 "raid_level": "raid0", 00:07:52.165 "superblock": true, 00:07:52.165 "num_base_bdevs": 2, 00:07:52.165 "num_base_bdevs_discovered": 2, 00:07:52.165 "num_base_bdevs_operational": 2, 00:07:52.165 "base_bdevs_list": [ 00:07:52.165 { 00:07:52.165 "name": "BaseBdev1", 00:07:52.165 "uuid": "15ed9b9f-35b9-436e-99f7-59adc1dc5751", 00:07:52.165 "is_configured": true, 00:07:52.165 "data_offset": 2048, 00:07:52.165 "data_size": 63488 
00:07:52.165 }, 00:07:52.165 { 00:07:52.165 "name": "BaseBdev2", 00:07:52.165 "uuid": "9d82e27a-3ec0-4986-89e0-c0d71d37c00f", 00:07:52.165 "is_configured": true, 00:07:52.165 "data_offset": 2048, 00:07:52.165 "data_size": 63488 00:07:52.165 } 00:07:52.165 ] 00:07:52.165 }' 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.165 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.424 [2024-12-13 08:19:04.735612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.424 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.425 "name": 
"Existed_Raid", 00:07:52.425 "aliases": [ 00:07:52.425 "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6" 00:07:52.425 ], 00:07:52.425 "product_name": "Raid Volume", 00:07:52.425 "block_size": 512, 00:07:52.425 "num_blocks": 126976, 00:07:52.425 "uuid": "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6", 00:07:52.425 "assigned_rate_limits": { 00:07:52.425 "rw_ios_per_sec": 0, 00:07:52.425 "rw_mbytes_per_sec": 0, 00:07:52.425 "r_mbytes_per_sec": 0, 00:07:52.425 "w_mbytes_per_sec": 0 00:07:52.425 }, 00:07:52.425 "claimed": false, 00:07:52.425 "zoned": false, 00:07:52.425 "supported_io_types": { 00:07:52.425 "read": true, 00:07:52.425 "write": true, 00:07:52.425 "unmap": true, 00:07:52.425 "flush": true, 00:07:52.425 "reset": true, 00:07:52.425 "nvme_admin": false, 00:07:52.425 "nvme_io": false, 00:07:52.425 "nvme_io_md": false, 00:07:52.425 "write_zeroes": true, 00:07:52.425 "zcopy": false, 00:07:52.425 "get_zone_info": false, 00:07:52.425 "zone_management": false, 00:07:52.425 "zone_append": false, 00:07:52.425 "compare": false, 00:07:52.425 "compare_and_write": false, 00:07:52.425 "abort": false, 00:07:52.425 "seek_hole": false, 00:07:52.425 "seek_data": false, 00:07:52.425 "copy": false, 00:07:52.425 "nvme_iov_md": false 00:07:52.425 }, 00:07:52.425 "memory_domains": [ 00:07:52.425 { 00:07:52.425 "dma_device_id": "system", 00:07:52.425 "dma_device_type": 1 00:07:52.425 }, 00:07:52.425 { 00:07:52.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.425 "dma_device_type": 2 00:07:52.425 }, 00:07:52.425 { 00:07:52.425 "dma_device_id": "system", 00:07:52.425 "dma_device_type": 1 00:07:52.425 }, 00:07:52.425 { 00:07:52.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.425 "dma_device_type": 2 00:07:52.425 } 00:07:52.425 ], 00:07:52.425 "driver_specific": { 00:07:52.425 "raid": { 00:07:52.425 "uuid": "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6", 00:07:52.425 "strip_size_kb": 64, 00:07:52.425 "state": "online", 00:07:52.425 "raid_level": "raid0", 00:07:52.425 "superblock": true, 00:07:52.425 
"num_base_bdevs": 2, 00:07:52.425 "num_base_bdevs_discovered": 2, 00:07:52.425 "num_base_bdevs_operational": 2, 00:07:52.425 "base_bdevs_list": [ 00:07:52.425 { 00:07:52.425 "name": "BaseBdev1", 00:07:52.425 "uuid": "15ed9b9f-35b9-436e-99f7-59adc1dc5751", 00:07:52.425 "is_configured": true, 00:07:52.425 "data_offset": 2048, 00:07:52.425 "data_size": 63488 00:07:52.425 }, 00:07:52.425 { 00:07:52.425 "name": "BaseBdev2", 00:07:52.425 "uuid": "9d82e27a-3ec0-4986-89e0-c0d71d37c00f", 00:07:52.425 "is_configured": true, 00:07:52.425 "data_offset": 2048, 00:07:52.425 "data_size": 63488 00:07:52.425 } 00:07:52.425 ] 00:07:52.425 } 00:07:52.425 } 00:07:52.425 }' 00:07:52.425 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.683 BaseBdev2' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.683 08:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.683 [2024-12-13 08:19:04.955133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.683 [2024-12-13 08:19:04.955170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.683 [2024-12-13 08:19:04.955225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.947 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.948 08:19:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.948 "name": "Existed_Raid", 00:07:52.948 "uuid": "7465e9ba-6660-4a33-96cc-c2e14b4f1fc6", 00:07:52.948 "strip_size_kb": 64, 00:07:52.948 "state": "offline", 00:07:52.948 "raid_level": "raid0", 00:07:52.948 "superblock": true, 00:07:52.948 "num_base_bdevs": 2, 00:07:52.948 "num_base_bdevs_discovered": 1, 00:07:52.948 "num_base_bdevs_operational": 1, 00:07:52.948 "base_bdevs_list": [ 00:07:52.948 { 00:07:52.948 "name": null, 00:07:52.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.948 "is_configured": false, 00:07:52.948 "data_offset": 0, 00:07:52.948 "data_size": 63488 00:07:52.948 }, 00:07:52.948 { 00:07:52.948 "name": "BaseBdev2", 00:07:52.948 "uuid": "9d82e27a-3ec0-4986-89e0-c0d71d37c00f", 00:07:52.948 "is_configured": true, 00:07:52.948 "data_offset": 2048, 00:07:52.948 "data_size": 63488 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 }' 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.948 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.208 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.208 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.208 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.209 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.209 08:19:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.209 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.209 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.468 [2024-12-13 08:19:05.603873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.468 [2024-12-13 08:19:05.603977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.468 08:19:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61106 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61106 ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61106 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61106 00:07:53.468 killing process with pid 61106 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61106' 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61106 00:07:53.468 [2024-12-13 08:19:05.793237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.468 08:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61106 00:07:53.468 [2024-12-13 08:19:05.810837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.846 08:19:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:54.846 00:07:54.846 real 0m5.144s 00:07:54.846 user 0m7.449s 00:07:54.846 sys 0m0.805s 00:07:54.846 ************************************ 00:07:54.846 END TEST raid_state_function_test_sb 00:07:54.846 ************************************ 00:07:54.846 08:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.846 08:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 08:19:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:54.846 08:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.846 08:19:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.846 08:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 ************************************ 00:07:54.846 START TEST raid_superblock_test 00:07:54.846 ************************************ 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:54.846 08:19:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61358 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61358 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61358 ']' 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.846 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 [2024-12-13 08:19:07.097377] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:54.846 [2024-12-13 08:19:07.097582] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 00:07:55.104 [2024-12-13 08:19:07.256172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.104 [2024-12-13 08:19:07.372822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.363 [2024-12-13 08:19:07.577044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.364 [2024-12-13 08:19:07.577194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.622 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.622 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.622 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:55.622 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.622 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.881 08:19:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 malloc1 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 [2024-12-13 08:19:08.037190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.881 [2024-12-13 08:19:08.037288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.881 [2024-12-13 08:19:08.037328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:55.881 [2024-12-13 08:19:08.037357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.881 [2024-12-13 08:19:08.039423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.881 [2024-12-13 08:19:08.039494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.881 pt1 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.881 08:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 malloc2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 [2024-12-13 08:19:08.092943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.881 [2024-12-13 08:19:08.093041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.881 [2024-12-13 08:19:08.093097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:55.881 
[2024-12-13 08:19:08.093141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.881 [2024-12-13 08:19:08.095283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.881 [2024-12-13 08:19:08.095375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.881 pt2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 [2024-12-13 08:19:08.104994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.881 [2024-12-13 08:19:08.106869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.881 [2024-12-13 08:19:08.107074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.881 [2024-12-13 08:19:08.107143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.881 [2024-12-13 08:19:08.107412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.881 [2024-12-13 08:19:08.107600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.881 [2024-12-13 08:19:08.107643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:55.881 [2024-12-13 08:19:08.107852] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.881 "name": "raid_bdev1", 00:07:55.881 "uuid": 
"8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:55.881 "strip_size_kb": 64, 00:07:55.881 "state": "online", 00:07:55.881 "raid_level": "raid0", 00:07:55.881 "superblock": true, 00:07:55.881 "num_base_bdevs": 2, 00:07:55.881 "num_base_bdevs_discovered": 2, 00:07:55.881 "num_base_bdevs_operational": 2, 00:07:55.881 "base_bdevs_list": [ 00:07:55.881 { 00:07:55.881 "name": "pt1", 00:07:55.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.881 "is_configured": true, 00:07:55.881 "data_offset": 2048, 00:07:55.881 "data_size": 63488 00:07:55.881 }, 00:07:55.881 { 00:07:55.881 "name": "pt2", 00:07:55.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.881 "is_configured": true, 00:07:55.881 "data_offset": 2048, 00:07:55.881 "data_size": 63488 00:07:55.881 } 00:07:55.881 ] 00:07:55.881 }' 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.881 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.449 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.449 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.449 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.449 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.449 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.450 
08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.450 [2024-12-13 08:19:08.592497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.450 "name": "raid_bdev1", 00:07:56.450 "aliases": [ 00:07:56.450 "8a53ed60-0b84-4c4e-85ec-0eb933a66da6" 00:07:56.450 ], 00:07:56.450 "product_name": "Raid Volume", 00:07:56.450 "block_size": 512, 00:07:56.450 "num_blocks": 126976, 00:07:56.450 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:56.450 "assigned_rate_limits": { 00:07:56.450 "rw_ios_per_sec": 0, 00:07:56.450 "rw_mbytes_per_sec": 0, 00:07:56.450 "r_mbytes_per_sec": 0, 00:07:56.450 "w_mbytes_per_sec": 0 00:07:56.450 }, 00:07:56.450 "claimed": false, 00:07:56.450 "zoned": false, 00:07:56.450 "supported_io_types": { 00:07:56.450 "read": true, 00:07:56.450 "write": true, 00:07:56.450 "unmap": true, 00:07:56.450 "flush": true, 00:07:56.450 "reset": true, 00:07:56.450 "nvme_admin": false, 00:07:56.450 "nvme_io": false, 00:07:56.450 "nvme_io_md": false, 00:07:56.450 "write_zeroes": true, 00:07:56.450 "zcopy": false, 00:07:56.450 "get_zone_info": false, 00:07:56.450 "zone_management": false, 00:07:56.450 "zone_append": false, 00:07:56.450 "compare": false, 00:07:56.450 "compare_and_write": false, 00:07:56.450 "abort": false, 00:07:56.450 "seek_hole": false, 00:07:56.450 "seek_data": false, 00:07:56.450 "copy": false, 00:07:56.450 "nvme_iov_md": false 00:07:56.450 }, 00:07:56.450 "memory_domains": [ 00:07:56.450 { 00:07:56.450 "dma_device_id": "system", 00:07:56.450 "dma_device_type": 1 00:07:56.450 }, 00:07:56.450 { 00:07:56.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.450 "dma_device_type": 2 00:07:56.450 }, 00:07:56.450 { 00:07:56.450 "dma_device_id": "system", 00:07:56.450 
"dma_device_type": 1 00:07:56.450 }, 00:07:56.450 { 00:07:56.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.450 "dma_device_type": 2 00:07:56.450 } 00:07:56.450 ], 00:07:56.450 "driver_specific": { 00:07:56.450 "raid": { 00:07:56.450 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:56.450 "strip_size_kb": 64, 00:07:56.450 "state": "online", 00:07:56.450 "raid_level": "raid0", 00:07:56.450 "superblock": true, 00:07:56.450 "num_base_bdevs": 2, 00:07:56.450 "num_base_bdevs_discovered": 2, 00:07:56.450 "num_base_bdevs_operational": 2, 00:07:56.450 "base_bdevs_list": [ 00:07:56.450 { 00:07:56.450 "name": "pt1", 00:07:56.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.450 "is_configured": true, 00:07:56.450 "data_offset": 2048, 00:07:56.450 "data_size": 63488 00:07:56.450 }, 00:07:56.450 { 00:07:56.450 "name": "pt2", 00:07:56.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.450 "is_configured": true, 00:07:56.450 "data_offset": 2048, 00:07:56.450 "data_size": 63488 00:07:56.450 } 00:07:56.450 ] 00:07:56.450 } 00:07:56.450 } 00:07:56.450 }' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.450 pt2' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.450 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.733 [2024-12-13 08:19:08.828302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a53ed60-0b84-4c4e-85ec-0eb933a66da6 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8a53ed60-0b84-4c4e-85ec-0eb933a66da6 ']' 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.733 [2024-12-13 08:19:08.875691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.733 [2024-12-13 08:19:08.875796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.733 [2024-12-13 08:19:08.875934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.733 [2024-12-13 08:19:08.876026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.733 [2024-12-13 08:19:08.876089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.733 
08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:56.733 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:56.734 08:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 [2024-12-13 08:19:09.015515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:56.734 [2024-12-13 08:19:09.017823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:56.734 [2024-12-13 08:19:09.017962] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:56.734 [2024-12-13 08:19:09.018073] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:56.734 [2024-12-13 08:19:09.018162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.734 [2024-12-13 08:19:09.018203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:56.734 request: 00:07:56.734 { 00:07:56.734 "name": "raid_bdev1", 00:07:56.734 "raid_level": "raid0", 00:07:56.734 "base_bdevs": [ 00:07:56.734 "malloc1", 00:07:56.734 "malloc2" 00:07:56.734 ], 00:07:56.734 "strip_size_kb": 64, 00:07:56.734 "superblock": false, 00:07:56.734 "method": "bdev_raid_create", 00:07:56.734 "req_id": 1 00:07:56.734 } 00:07:56.734 Got JSON-RPC error response 00:07:56.734 response: 00:07:56.734 { 00:07:56.734 "code": -17, 00:07:56.734 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:56.734 } 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 [2024-12-13 08:19:09.079348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.734 [2024-12-13 08:19:09.079418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.734 [2024-12-13 08:19:09.079439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:56.734 [2024-12-13 08:19:09.079451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.015 [2024-12-13 08:19:09.081865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.015 [2024-12-13 08:19:09.081951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.015 [2024-12-13 08:19:09.082059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.015 [2024-12-13 08:19:09.082139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.015 pt1 00:07:57.015 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.015 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:57.015 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.015 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.015 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.016 "name": "raid_bdev1", 00:07:57.016 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:57.016 "strip_size_kb": 64, 00:07:57.016 "state": "configuring", 00:07:57.016 "raid_level": "raid0", 00:07:57.016 "superblock": true, 00:07:57.016 "num_base_bdevs": 2, 00:07:57.016 "num_base_bdevs_discovered": 1, 00:07:57.016 "num_base_bdevs_operational": 2, 00:07:57.016 "base_bdevs_list": [ 00:07:57.016 { 00:07:57.016 "name": "pt1", 00:07:57.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.016 "is_configured": true, 00:07:57.016 "data_offset": 2048, 00:07:57.016 "data_size": 63488 00:07:57.016 }, 00:07:57.016 { 00:07:57.016 "name": null, 00:07:57.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.016 "is_configured": false, 00:07:57.016 "data_offset": 2048, 00:07:57.016 "data_size": 63488 00:07:57.016 } 00:07:57.016 ] 00:07:57.016 }' 00:07:57.016 08:19:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.016 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.275 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.275 [2024-12-13 08:19:09.542618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.275 [2024-12-13 08:19:09.542696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.275 [2024-12-13 08:19:09.542720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:57.275 [2024-12-13 08:19:09.542732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.275 [2024-12-13 08:19:09.543249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.275 [2024-12-13 08:19:09.543289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.275 [2024-12-13 08:19:09.543382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.275 [2024-12-13 08:19:09.543417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.275 [2024-12-13 08:19:09.543538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.275 [2024-12-13 08:19:09.543557] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.275 [2024-12-13 08:19:09.543821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:57.275 [2024-12-13 08:19:09.543988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.276 [2024-12-13 08:19:09.544004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:57.276 [2024-12-13 08:19:09.544165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.276 pt2 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.276 "name": "raid_bdev1", 00:07:57.276 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:57.276 "strip_size_kb": 64, 00:07:57.276 "state": "online", 00:07:57.276 "raid_level": "raid0", 00:07:57.276 "superblock": true, 00:07:57.276 "num_base_bdevs": 2, 00:07:57.276 "num_base_bdevs_discovered": 2, 00:07:57.276 "num_base_bdevs_operational": 2, 00:07:57.276 "base_bdevs_list": [ 00:07:57.276 { 00:07:57.276 "name": "pt1", 00:07:57.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.276 "is_configured": true, 00:07:57.276 "data_offset": 2048, 00:07:57.276 "data_size": 63488 00:07:57.276 }, 00:07:57.276 { 00:07:57.276 "name": "pt2", 00:07:57.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.276 "is_configured": true, 00:07:57.276 "data_offset": 2048, 00:07:57.276 "data_size": 63488 00:07:57.276 } 00:07:57.276 ] 00:07:57.276 }' 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.276 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.845 
08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 [2024-12-13 08:19:09.970146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.845 "name": "raid_bdev1", 00:07:57.845 "aliases": [ 00:07:57.845 "8a53ed60-0b84-4c4e-85ec-0eb933a66da6" 00:07:57.845 ], 00:07:57.845 "product_name": "Raid Volume", 00:07:57.845 "block_size": 512, 00:07:57.845 "num_blocks": 126976, 00:07:57.845 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:57.845 "assigned_rate_limits": { 00:07:57.845 "rw_ios_per_sec": 0, 00:07:57.845 "rw_mbytes_per_sec": 0, 00:07:57.845 "r_mbytes_per_sec": 0, 00:07:57.845 "w_mbytes_per_sec": 0 00:07:57.845 }, 00:07:57.845 "claimed": false, 00:07:57.845 "zoned": false, 00:07:57.845 "supported_io_types": { 00:07:57.845 "read": true, 00:07:57.845 "write": true, 00:07:57.845 "unmap": true, 00:07:57.845 "flush": true, 00:07:57.845 "reset": true, 00:07:57.845 "nvme_admin": false, 00:07:57.845 "nvme_io": false, 00:07:57.845 "nvme_io_md": false, 00:07:57.845 
"write_zeroes": true, 00:07:57.845 "zcopy": false, 00:07:57.845 "get_zone_info": false, 00:07:57.845 "zone_management": false, 00:07:57.845 "zone_append": false, 00:07:57.845 "compare": false, 00:07:57.845 "compare_and_write": false, 00:07:57.845 "abort": false, 00:07:57.845 "seek_hole": false, 00:07:57.845 "seek_data": false, 00:07:57.845 "copy": false, 00:07:57.845 "nvme_iov_md": false 00:07:57.845 }, 00:07:57.845 "memory_domains": [ 00:07:57.845 { 00:07:57.845 "dma_device_id": "system", 00:07:57.845 "dma_device_type": 1 00:07:57.845 }, 00:07:57.845 { 00:07:57.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.845 "dma_device_type": 2 00:07:57.845 }, 00:07:57.845 { 00:07:57.845 "dma_device_id": "system", 00:07:57.845 "dma_device_type": 1 00:07:57.845 }, 00:07:57.845 { 00:07:57.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.845 "dma_device_type": 2 00:07:57.845 } 00:07:57.845 ], 00:07:57.845 "driver_specific": { 00:07:57.845 "raid": { 00:07:57.845 "uuid": "8a53ed60-0b84-4c4e-85ec-0eb933a66da6", 00:07:57.845 "strip_size_kb": 64, 00:07:57.845 "state": "online", 00:07:57.845 "raid_level": "raid0", 00:07:57.845 "superblock": true, 00:07:57.845 "num_base_bdevs": 2, 00:07:57.845 "num_base_bdevs_discovered": 2, 00:07:57.845 "num_base_bdevs_operational": 2, 00:07:57.845 "base_bdevs_list": [ 00:07:57.845 { 00:07:57.845 "name": "pt1", 00:07:57.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.845 "is_configured": true, 00:07:57.845 "data_offset": 2048, 00:07:57.845 "data_size": 63488 00:07:57.845 }, 00:07:57.845 { 00:07:57.845 "name": "pt2", 00:07:57.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.845 "is_configured": true, 00:07:57.845 "data_offset": 2048, 00:07:57.845 "data_size": 63488 00:07:57.845 } 00:07:57.845 ] 00:07:57.845 } 00:07:57.845 } 00:07:57.845 }' 00:07:57.845 08:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.845 pt2' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.845 08:19:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:57.845 [2024-12-13 08:19:10.169780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.845 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8a53ed60-0b84-4c4e-85ec-0eb933a66da6 '!=' 8a53ed60-0b84-4c4e-85ec-0eb933a66da6 ']' 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61358 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61358 ']' 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61358 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61358 00:07:58.105 killing process with pid 61358 
00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61358' 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61358 00:07:58.105 [2024-12-13 08:19:10.251625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.105 [2024-12-13 08:19:10.251728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.105 [2024-12-13 08:19:10.251780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.105 [2024-12-13 08:19:10.251793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.105 08:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61358 00:07:58.105 [2024-12-13 08:19:10.463162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.483 08:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.483 00:07:59.483 real 0m4.605s 00:07:59.483 user 0m6.481s 00:07:59.483 sys 0m0.766s 00:07:59.483 08:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.483 08:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.483 ************************************ 00:07:59.483 END TEST raid_superblock_test 00:07:59.483 ************************************ 00:07:59.483 08:19:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:59.483 08:19:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.483 08:19:11 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.483 08:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.483 ************************************ 00:07:59.483 START TEST raid_read_error_test 00:07:59.483 ************************************ 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.483 08:19:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MhZJ9FJMkO 00:07:59.483 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61570 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61570 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61570 ']' 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.484 08:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.484 [2024-12-13 08:19:11.784759] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:59.484 [2024-12-13 08:19:11.784866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61570 ] 00:07:59.743 [2024-12-13 08:19:11.954989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.743 [2024-12-13 08:19:12.071775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.002 [2024-12-13 08:19:12.280240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.002 [2024-12-13 08:19:12.280287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.570 BaseBdev1_malloc 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.570 true 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.570 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.570 [2024-12-13 08:19:12.694094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.571 [2024-12-13 08:19:12.694155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.571 [2024-12-13 08:19:12.694173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.571 [2024-12-13 08:19:12.694183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.571 [2024-12-13 08:19:12.696336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.571 [2024-12-13 08:19:12.696383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.571 BaseBdev1 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:00.571 BaseBdev2_malloc 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 true 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 [2024-12-13 08:19:12.760771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.571 [2024-12-13 08:19:12.760839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.571 [2024-12-13 08:19:12.760860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.571 [2024-12-13 08:19:12.760870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.571 [2024-12-13 08:19:12.763040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.571 [2024-12-13 08:19:12.763079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.571 BaseBdev2 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.571 08:19:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 [2024-12-13 08:19:12.772820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.571 [2024-12-13 08:19:12.774765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.571 [2024-12-13 08:19:12.775036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.571 [2024-12-13 08:19:12.775068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.571 [2024-12-13 08:19:12.775364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:00.571 [2024-12-13 08:19:12.775571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.571 [2024-12-13 08:19:12.775595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.571 [2024-12-13 08:19:12.775812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.571 "name": "raid_bdev1", 00:08:00.571 "uuid": "dba24ff8-63d3-4e7e-82d6-e8d13e389b44", 00:08:00.571 "strip_size_kb": 64, 00:08:00.571 "state": "online", 00:08:00.571 "raid_level": "raid0", 00:08:00.571 "superblock": true, 00:08:00.571 "num_base_bdevs": 2, 00:08:00.571 "num_base_bdevs_discovered": 2, 00:08:00.571 "num_base_bdevs_operational": 2, 00:08:00.571 "base_bdevs_list": [ 00:08:00.571 { 00:08:00.571 "name": "BaseBdev1", 00:08:00.571 "uuid": "c6863f1c-504b-5dd2-8429-4793b708b0d3", 00:08:00.571 "is_configured": true, 00:08:00.571 "data_offset": 2048, 00:08:00.571 "data_size": 63488 00:08:00.571 }, 00:08:00.571 { 00:08:00.571 "name": "BaseBdev2", 00:08:00.571 "uuid": "2e05fae1-3bbd-588a-a727-f7a3eecd27a1", 00:08:00.571 "is_configured": true, 00:08:00.571 "data_offset": 2048, 00:08:00.571 "data_size": 63488 00:08:00.571 } 00:08:00.571 ] 00:08:00.571 }' 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.571 08:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.830 08:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:00.830 08:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.088 [2024-12-13 08:19:13.273358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.023 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.023 "name": "raid_bdev1", 00:08:02.023 "uuid": "dba24ff8-63d3-4e7e-82d6-e8d13e389b44", 00:08:02.023 "strip_size_kb": 64, 00:08:02.023 "state": "online", 00:08:02.023 "raid_level": "raid0", 00:08:02.023 "superblock": true, 00:08:02.023 "num_base_bdevs": 2, 00:08:02.023 "num_base_bdevs_discovered": 2, 00:08:02.023 "num_base_bdevs_operational": 2, 00:08:02.023 "base_bdevs_list": [ 00:08:02.023 { 00:08:02.023 "name": "BaseBdev1", 00:08:02.023 "uuid": "c6863f1c-504b-5dd2-8429-4793b708b0d3", 00:08:02.023 "is_configured": true, 00:08:02.023 "data_offset": 2048, 00:08:02.023 "data_size": 63488 00:08:02.023 }, 00:08:02.023 { 00:08:02.023 "name": "BaseBdev2", 00:08:02.023 "uuid": "2e05fae1-3bbd-588a-a727-f7a3eecd27a1", 00:08:02.023 "is_configured": true, 00:08:02.024 "data_offset": 2048, 00:08:02.024 "data_size": 63488 00:08:02.024 } 00:08:02.024 ] 00:08:02.024 }' 00:08:02.024 08:19:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.024 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.598 [2024-12-13 08:19:14.706382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.598 [2024-12-13 08:19:14.706426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.598 [2024-12-13 08:19:14.709518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.598 [2024-12-13 08:19:14.709568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.598 [2024-12-13 08:19:14.709606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.598 [2024-12-13 08:19:14.709619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:02.598 { 00:08:02.598 "results": [ 00:08:02.598 { 00:08:02.598 "job": "raid_bdev1", 00:08:02.598 "core_mask": "0x1", 00:08:02.598 "workload": "randrw", 00:08:02.598 "percentage": 50, 00:08:02.598 "status": "finished", 00:08:02.598 "queue_depth": 1, 00:08:02.598 "io_size": 131072, 00:08:02.598 "runtime": 1.433876, 00:08:02.598 "iops": 15182.623881005053, 00:08:02.598 "mibps": 1897.8279851256316, 00:08:02.598 "io_failed": 1, 00:08:02.598 "io_timeout": 0, 00:08:02.598 "avg_latency_us": 91.20772567328959, 00:08:02.598 "min_latency_us": 26.717903930131005, 00:08:02.598 "max_latency_us": 1466.6899563318777 00:08:02.598 } 00:08:02.598 ], 00:08:02.598 "core_count": 1 00:08:02.598 } 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61570 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61570 ']' 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61570 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61570 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.598 killing process with pid 61570 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61570' 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61570 00:08:02.598 [2024-12-13 08:19:14.754408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.598 08:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61570 00:08:02.598 [2024-12-13 08:19:14.887849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MhZJ9FJMkO 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:03.981 00:08:03.981 real 0m4.405s 00:08:03.981 user 0m5.290s 00:08:03.981 sys 0m0.550s 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.981 08:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.981 ************************************ 00:08:03.981 END TEST raid_read_error_test 00:08:03.981 ************************************ 00:08:03.981 08:19:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:03.981 08:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.981 08:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.981 08:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.981 ************************************ 00:08:03.981 START TEST raid_write_error_test 00:08:03.981 ************************************ 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.981 08:19:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NHD2Ior6si 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61710 00:08:03.981 08:19:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61710 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61710 ']' 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.981 08:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.981 [2024-12-13 08:19:16.263962] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:08:03.981 [2024-12-13 08:19:16.264081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61710 ] 00:08:04.240 [2024-12-13 08:19:16.440017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.240 [2024-12-13 08:19:16.569102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.498 [2024-12-13 08:19:16.773426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.499 [2024-12-13 08:19:16.773500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.767 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 BaseBdev1_malloc 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 true 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 [2024-12-13 08:19:17.181936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.027 [2024-12-13 08:19:17.182006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.027 [2024-12-13 08:19:17.182026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.027 [2024-12-13 08:19:17.182038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.027 [2024-12-13 08:19:17.184175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.027 [2024-12-13 08:19:17.184214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.027 BaseBdev1 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 BaseBdev2_malloc 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.027 08:19:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 true 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 [2024-12-13 08:19:17.246950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.027 [2024-12-13 08:19:17.247002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.027 [2024-12-13 08:19:17.247019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.027 [2024-12-13 08:19:17.247030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.027 [2024-12-13 08:19:17.249144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.027 [2024-12-13 08:19:17.249179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.027 BaseBdev2 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.027 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.027 [2024-12-13 08:19:17.258984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.027 [2024-12-13 08:19:17.260815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.028 [2024-12-13 08:19:17.261021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.028 [2024-12-13 08:19:17.261039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:05.028 [2024-12-13 08:19:17.261318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.028 [2024-12-13 08:19:17.261520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.028 [2024-12-13 08:19:17.261543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.028 [2024-12-13 08:19:17.261713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.028 "name": "raid_bdev1", 00:08:05.028 "uuid": "7e223b87-42a5-4d6c-a76d-a5e477cc92b2", 00:08:05.028 "strip_size_kb": 64, 00:08:05.028 "state": "online", 00:08:05.028 "raid_level": "raid0", 00:08:05.028 "superblock": true, 00:08:05.028 "num_base_bdevs": 2, 00:08:05.028 "num_base_bdevs_discovered": 2, 00:08:05.028 "num_base_bdevs_operational": 2, 00:08:05.028 "base_bdevs_list": [ 00:08:05.028 { 00:08:05.028 "name": "BaseBdev1", 00:08:05.028 "uuid": "271d7c48-d7a5-58d5-97f6-b8f7a26206c8", 00:08:05.028 "is_configured": true, 00:08:05.028 "data_offset": 2048, 00:08:05.028 "data_size": 63488 00:08:05.028 }, 00:08:05.028 { 00:08:05.028 "name": "BaseBdev2", 00:08:05.028 "uuid": "8ada8ae8-cd54-56e6-9f0a-d88c9f7400f5", 00:08:05.028 "is_configured": true, 00:08:05.028 "data_offset": 2048, 00:08:05.028 "data_size": 63488 00:08:05.028 } 00:08:05.028 ] 00:08:05.028 }' 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.028 08:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.597 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:05.597 08:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.597 [2024-12-13 08:19:17.839438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:06.533 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.534 "name": "raid_bdev1", 00:08:06.534 "uuid": "7e223b87-42a5-4d6c-a76d-a5e477cc92b2", 00:08:06.534 "strip_size_kb": 64, 00:08:06.534 "state": "online", 00:08:06.534 "raid_level": "raid0", 00:08:06.534 "superblock": true, 00:08:06.534 "num_base_bdevs": 2, 00:08:06.534 "num_base_bdevs_discovered": 2, 00:08:06.534 "num_base_bdevs_operational": 2, 00:08:06.534 "base_bdevs_list": [ 00:08:06.534 { 00:08:06.534 "name": "BaseBdev1", 00:08:06.534 "uuid": "271d7c48-d7a5-58d5-97f6-b8f7a26206c8", 00:08:06.534 "is_configured": true, 00:08:06.534 "data_offset": 2048, 00:08:06.534 "data_size": 63488 00:08:06.534 }, 00:08:06.534 { 00:08:06.534 "name": "BaseBdev2", 00:08:06.534 "uuid": "8ada8ae8-cd54-56e6-9f0a-d88c9f7400f5", 00:08:06.534 "is_configured": true, 00:08:06.534 "data_offset": 2048, 00:08:06.534 "data_size": 63488 00:08:06.534 } 00:08:06.534 ] 00:08:06.534 }' 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.534 08:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.102 [2024-12-13 08:19:19.211873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.102 [2024-12-13 08:19:19.211922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.102 [2024-12-13 08:19:19.214808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.102 [2024-12-13 08:19:19.214860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.102 [2024-12-13 08:19:19.214893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.102 [2024-12-13 08:19:19.214905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.102 { 00:08:07.102 "results": [ 00:08:07.102 { 00:08:07.102 "job": "raid_bdev1", 00:08:07.102 "core_mask": "0x1", 00:08:07.102 "workload": "randrw", 00:08:07.102 "percentage": 50, 00:08:07.102 "status": "finished", 00:08:07.102 "queue_depth": 1, 00:08:07.102 "io_size": 131072, 00:08:07.102 "runtime": 1.373217, 00:08:07.102 "iops": 14965.588104429235, 00:08:07.102 "mibps": 1870.6985130536543, 00:08:07.102 "io_failed": 1, 00:08:07.102 "io_timeout": 0, 00:08:07.102 "avg_latency_us": 92.50932430847475, 00:08:07.102 "min_latency_us": 26.717903930131005, 00:08:07.102 "max_latency_us": 1452.380786026201 00:08:07.102 } 00:08:07.102 ], 00:08:07.102 "core_count": 1 00:08:07.102 } 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61710 00:08:07.102 08:19:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61710 ']' 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61710 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61710 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.102 killing process with pid 61710 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61710' 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61710 00:08:07.102 [2024-12-13 08:19:19.250672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.102 08:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61710 00:08:07.102 [2024-12-13 08:19:19.388365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NHD2Ior6si 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.482 08:19:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:08.482 00:08:08.482 real 0m4.450s 00:08:08.482 user 0m5.377s 00:08:08.482 sys 0m0.531s 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.482 08:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 ************************************ 00:08:08.482 END TEST raid_write_error_test 00:08:08.482 ************************************ 00:08:08.482 08:19:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.482 08:19:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:08.482 08:19:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.482 08:19:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.482 08:19:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.482 ************************************ 00:08:08.482 START TEST raid_state_function_test 00:08:08.482 ************************************ 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.482 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61854 
00:08:08.483 Process raid pid: 61854 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61854' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61854 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61854 ']' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.483 08:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.483 [2024-12-13 08:19:20.772484] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:08:08.483 [2024-12-13 08:19:20.772602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.767 [2024-12-13 08:19:20.946880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.767 [2024-12-13 08:19:21.069360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.041 [2024-12-13 08:19:21.270392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.041 [2024-12-13 08:19:21.270440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.300 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.300 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.301 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.301 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.301 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.301 [2024-12-13 08:19:21.660928] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.301 [2024-12-13 08:19:21.660988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.301 [2024-12-13 08:19:21.660999] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.301 [2024-12-13 08:19:21.661010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.560 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.561 08:19:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.561 "name": "Existed_Raid", 00:08:09.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.561 "strip_size_kb": 64, 00:08:09.561 "state": "configuring", 00:08:09.561 
"raid_level": "concat", 00:08:09.561 "superblock": false, 00:08:09.561 "num_base_bdevs": 2, 00:08:09.561 "num_base_bdevs_discovered": 0, 00:08:09.561 "num_base_bdevs_operational": 2, 00:08:09.561 "base_bdevs_list": [ 00:08:09.561 { 00:08:09.561 "name": "BaseBdev1", 00:08:09.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.561 "is_configured": false, 00:08:09.561 "data_offset": 0, 00:08:09.561 "data_size": 0 00:08:09.561 }, 00:08:09.561 { 00:08:09.561 "name": "BaseBdev2", 00:08:09.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.561 "is_configured": false, 00:08:09.561 "data_offset": 0, 00:08:09.561 "data_size": 0 00:08:09.561 } 00:08:09.561 ] 00:08:09.561 }' 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.561 08:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.821 [2024-12-13 08:19:22.104133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.821 [2024-12-13 08:19:22.104178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:09.821 [2024-12-13 08:19:22.116076] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.821 [2024-12-13 08:19:22.116135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.821 [2024-12-13 08:19:22.116147] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.821 [2024-12-13 08:19:22.116160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.821 [2024-12-13 08:19:22.165234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.821 BaseBdev1 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.821 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.080 [ 00:08:10.080 { 00:08:10.080 "name": "BaseBdev1", 00:08:10.080 "aliases": [ 00:08:10.080 "302bd641-a195-40ca-8943-b1ca16474b43" 00:08:10.080 ], 00:08:10.080 "product_name": "Malloc disk", 00:08:10.080 "block_size": 512, 00:08:10.080 "num_blocks": 65536, 00:08:10.080 "uuid": "302bd641-a195-40ca-8943-b1ca16474b43", 00:08:10.080 "assigned_rate_limits": { 00:08:10.080 "rw_ios_per_sec": 0, 00:08:10.080 "rw_mbytes_per_sec": 0, 00:08:10.080 "r_mbytes_per_sec": 0, 00:08:10.080 "w_mbytes_per_sec": 0 00:08:10.080 }, 00:08:10.080 "claimed": true, 00:08:10.080 "claim_type": "exclusive_write", 00:08:10.080 "zoned": false, 00:08:10.080 "supported_io_types": { 00:08:10.080 "read": true, 00:08:10.080 "write": true, 00:08:10.080 "unmap": true, 00:08:10.080 "flush": true, 00:08:10.080 "reset": true, 00:08:10.080 "nvme_admin": false, 00:08:10.080 "nvme_io": false, 00:08:10.080 "nvme_io_md": false, 00:08:10.080 "write_zeroes": true, 00:08:10.080 "zcopy": true, 00:08:10.080 "get_zone_info": false, 00:08:10.080 "zone_management": false, 00:08:10.080 "zone_append": false, 00:08:10.080 "compare": false, 00:08:10.080 "compare_and_write": false, 00:08:10.080 "abort": true, 00:08:10.080 "seek_hole": false, 00:08:10.080 "seek_data": false, 00:08:10.080 "copy": true, 00:08:10.080 "nvme_iov_md": 
false 00:08:10.080 }, 00:08:10.080 "memory_domains": [ 00:08:10.080 { 00:08:10.080 "dma_device_id": "system", 00:08:10.080 "dma_device_type": 1 00:08:10.080 }, 00:08:10.080 { 00:08:10.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.081 "dma_device_type": 2 00:08:10.081 } 00:08:10.081 ], 00:08:10.081 "driver_specific": {} 00:08:10.081 } 00:08:10.081 ] 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.081 
08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.081 "name": "Existed_Raid", 00:08:10.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.081 "strip_size_kb": 64, 00:08:10.081 "state": "configuring", 00:08:10.081 "raid_level": "concat", 00:08:10.081 "superblock": false, 00:08:10.081 "num_base_bdevs": 2, 00:08:10.081 "num_base_bdevs_discovered": 1, 00:08:10.081 "num_base_bdevs_operational": 2, 00:08:10.081 "base_bdevs_list": [ 00:08:10.081 { 00:08:10.081 "name": "BaseBdev1", 00:08:10.081 "uuid": "302bd641-a195-40ca-8943-b1ca16474b43", 00:08:10.081 "is_configured": true, 00:08:10.081 "data_offset": 0, 00:08:10.081 "data_size": 65536 00:08:10.081 }, 00:08:10.081 { 00:08:10.081 "name": "BaseBdev2", 00:08:10.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.081 "is_configured": false, 00:08:10.081 "data_offset": 0, 00:08:10.081 "data_size": 0 00:08:10.081 } 00:08:10.081 ] 00:08:10.081 }' 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.081 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.340 [2024-12-13 08:19:22.644488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.340 [2024-12-13 08:19:22.644550] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.340 [2024-12-13 08:19:22.652548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.340 [2024-12-13 08:19:22.654597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.340 [2024-12-13 08:19:22.654668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.340 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.341 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.600 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.600 "name": "Existed_Raid", 00:08:10.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.600 "strip_size_kb": 64, 00:08:10.600 "state": "configuring", 00:08:10.600 "raid_level": "concat", 00:08:10.600 "superblock": false, 00:08:10.600 "num_base_bdevs": 2, 00:08:10.600 "num_base_bdevs_discovered": 1, 00:08:10.600 "num_base_bdevs_operational": 2, 00:08:10.600 "base_bdevs_list": [ 00:08:10.600 { 00:08:10.600 "name": "BaseBdev1", 00:08:10.600 "uuid": "302bd641-a195-40ca-8943-b1ca16474b43", 00:08:10.600 "is_configured": true, 00:08:10.600 "data_offset": 0, 00:08:10.600 "data_size": 65536 00:08:10.600 }, 00:08:10.600 { 00:08:10.600 "name": "BaseBdev2", 00:08:10.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.600 "is_configured": false, 00:08:10.600 "data_offset": 0, 00:08:10.600 "data_size": 0 00:08:10.600 } 
00:08:10.600 ] 00:08:10.600 }' 00:08:10.600 08:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.600 08:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 [2024-12-13 08:19:23.170306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.860 [2024-12-13 08:19:23.170364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.860 [2024-12-13 08:19:23.170372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:10.860 [2024-12-13 08:19:23.170608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:10.860 [2024-12-13 08:19:23.170807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.860 [2024-12-13 08:19:23.170835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.860 [2024-12-13 08:19:23.171093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.860 BaseBdev2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.860 08:19:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 [ 00:08:10.860 { 00:08:10.860 "name": "BaseBdev2", 00:08:10.860 "aliases": [ 00:08:10.860 "60b7a0ab-b1b0-4b18-bcbf-5acbed5cf498" 00:08:10.860 ], 00:08:10.860 "product_name": "Malloc disk", 00:08:10.860 "block_size": 512, 00:08:10.860 "num_blocks": 65536, 00:08:10.860 "uuid": "60b7a0ab-b1b0-4b18-bcbf-5acbed5cf498", 00:08:10.860 "assigned_rate_limits": { 00:08:10.860 "rw_ios_per_sec": 0, 00:08:10.860 "rw_mbytes_per_sec": 0, 00:08:10.860 "r_mbytes_per_sec": 0, 00:08:10.860 "w_mbytes_per_sec": 0 00:08:10.860 }, 00:08:10.860 "claimed": true, 00:08:10.860 "claim_type": "exclusive_write", 00:08:10.860 "zoned": false, 00:08:10.860 "supported_io_types": { 00:08:10.860 "read": true, 00:08:10.860 "write": true, 00:08:10.860 "unmap": true, 00:08:10.860 "flush": true, 00:08:10.860 "reset": true, 00:08:10.860 "nvme_admin": false, 00:08:10.860 "nvme_io": false, 00:08:10.860 "nvme_io_md": 
false, 00:08:10.860 "write_zeroes": true, 00:08:10.860 "zcopy": true, 00:08:10.860 "get_zone_info": false, 00:08:10.860 "zone_management": false, 00:08:10.860 "zone_append": false, 00:08:10.860 "compare": false, 00:08:10.860 "compare_and_write": false, 00:08:10.860 "abort": true, 00:08:10.860 "seek_hole": false, 00:08:10.860 "seek_data": false, 00:08:10.860 "copy": true, 00:08:10.860 "nvme_iov_md": false 00:08:10.860 }, 00:08:10.860 "memory_domains": [ 00:08:10.860 { 00:08:10.860 "dma_device_id": "system", 00:08:10.860 "dma_device_type": 1 00:08:10.860 }, 00:08:10.860 { 00:08:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.860 "dma_device_type": 2 00:08:10.860 } 00:08:10.860 ], 00:08:10.860 "driver_specific": {} 00:08:10.860 } 00:08:10.860 ] 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.860 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.120 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.120 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.120 "name": "Existed_Raid", 00:08:11.120 "uuid": "4ad5d313-0d6b-4f4b-a2e9-ba849854c792", 00:08:11.120 "strip_size_kb": 64, 00:08:11.120 "state": "online", 00:08:11.120 "raid_level": "concat", 00:08:11.120 "superblock": false, 00:08:11.120 "num_base_bdevs": 2, 00:08:11.120 "num_base_bdevs_discovered": 2, 00:08:11.120 "num_base_bdevs_operational": 2, 00:08:11.120 "base_bdevs_list": [ 00:08:11.120 { 00:08:11.120 "name": "BaseBdev1", 00:08:11.120 "uuid": "302bd641-a195-40ca-8943-b1ca16474b43", 00:08:11.120 "is_configured": true, 00:08:11.120 "data_offset": 0, 00:08:11.120 "data_size": 65536 00:08:11.120 }, 00:08:11.120 { 00:08:11.120 "name": "BaseBdev2", 00:08:11.120 "uuid": "60b7a0ab-b1b0-4b18-bcbf-5acbed5cf498", 00:08:11.120 "is_configured": true, 00:08:11.120 "data_offset": 0, 00:08:11.120 "data_size": 65536 00:08:11.120 } 00:08:11.120 ] 00:08:11.120 }' 00:08:11.120 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:11.120 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.380 [2024-12-13 08:19:23.609908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.380 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.380 "name": "Existed_Raid", 00:08:11.380 "aliases": [ 00:08:11.380 "4ad5d313-0d6b-4f4b-a2e9-ba849854c792" 00:08:11.380 ], 00:08:11.380 "product_name": "Raid Volume", 00:08:11.380 "block_size": 512, 00:08:11.380 "num_blocks": 131072, 00:08:11.380 "uuid": "4ad5d313-0d6b-4f4b-a2e9-ba849854c792", 00:08:11.380 "assigned_rate_limits": { 00:08:11.380 "rw_ios_per_sec": 0, 00:08:11.380 "rw_mbytes_per_sec": 0, 00:08:11.380 "r_mbytes_per_sec": 
0, 00:08:11.380 "w_mbytes_per_sec": 0 00:08:11.380 }, 00:08:11.380 "claimed": false, 00:08:11.380 "zoned": false, 00:08:11.380 "supported_io_types": { 00:08:11.380 "read": true, 00:08:11.380 "write": true, 00:08:11.380 "unmap": true, 00:08:11.380 "flush": true, 00:08:11.380 "reset": true, 00:08:11.380 "nvme_admin": false, 00:08:11.380 "nvme_io": false, 00:08:11.380 "nvme_io_md": false, 00:08:11.380 "write_zeroes": true, 00:08:11.380 "zcopy": false, 00:08:11.380 "get_zone_info": false, 00:08:11.380 "zone_management": false, 00:08:11.380 "zone_append": false, 00:08:11.380 "compare": false, 00:08:11.380 "compare_and_write": false, 00:08:11.380 "abort": false, 00:08:11.380 "seek_hole": false, 00:08:11.380 "seek_data": false, 00:08:11.380 "copy": false, 00:08:11.380 "nvme_iov_md": false 00:08:11.380 }, 00:08:11.380 "memory_domains": [ 00:08:11.380 { 00:08:11.380 "dma_device_id": "system", 00:08:11.380 "dma_device_type": 1 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.380 "dma_device_type": 2 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "dma_device_id": "system", 00:08:11.380 "dma_device_type": 1 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.380 "dma_device_type": 2 00:08:11.380 } 00:08:11.380 ], 00:08:11.380 "driver_specific": { 00:08:11.380 "raid": { 00:08:11.381 "uuid": "4ad5d313-0d6b-4f4b-a2e9-ba849854c792", 00:08:11.381 "strip_size_kb": 64, 00:08:11.381 "state": "online", 00:08:11.381 "raid_level": "concat", 00:08:11.381 "superblock": false, 00:08:11.381 "num_base_bdevs": 2, 00:08:11.381 "num_base_bdevs_discovered": 2, 00:08:11.381 "num_base_bdevs_operational": 2, 00:08:11.381 "base_bdevs_list": [ 00:08:11.381 { 00:08:11.381 "name": "BaseBdev1", 00:08:11.381 "uuid": "302bd641-a195-40ca-8943-b1ca16474b43", 00:08:11.381 "is_configured": true, 00:08:11.381 "data_offset": 0, 00:08:11.381 "data_size": 65536 00:08:11.381 }, 00:08:11.381 { 00:08:11.381 "name": "BaseBdev2", 
00:08:11.381 "uuid": "60b7a0ab-b1b0-4b18-bcbf-5acbed5cf498", 00:08:11.381 "is_configured": true, 00:08:11.381 "data_offset": 0, 00:08:11.381 "data_size": 65536 00:08:11.381 } 00:08:11.381 ] 00:08:11.381 } 00:08:11.381 } 00:08:11.381 }' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.381 BaseBdev2' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.381 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.641 [2024-12-13 08:19:23.813342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.641 [2024-12-13 08:19:23.813386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.641 [2024-12-13 08:19:23.813455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.641 "name": "Existed_Raid", 00:08:11.641 "uuid": "4ad5d313-0d6b-4f4b-a2e9-ba849854c792", 00:08:11.641 "strip_size_kb": 64, 00:08:11.641 
"state": "offline", 00:08:11.641 "raid_level": "concat", 00:08:11.641 "superblock": false, 00:08:11.641 "num_base_bdevs": 2, 00:08:11.641 "num_base_bdevs_discovered": 1, 00:08:11.641 "num_base_bdevs_operational": 1, 00:08:11.641 "base_bdevs_list": [ 00:08:11.641 { 00:08:11.641 "name": null, 00:08:11.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.641 "is_configured": false, 00:08:11.641 "data_offset": 0, 00:08:11.641 "data_size": 65536 00:08:11.641 }, 00:08:11.641 { 00:08:11.641 "name": "BaseBdev2", 00:08:11.641 "uuid": "60b7a0ab-b1b0-4b18-bcbf-5acbed5cf498", 00:08:11.641 "is_configured": true, 00:08:11.641 "data_offset": 0, 00:08:11.641 "data_size": 65536 00:08:11.641 } 00:08:11.641 ] 00:08:11.641 }' 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.641 08:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.901 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.901 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.901 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.901 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.161 [2024-12-13 08:19:24.317927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.161 [2024-12-13 08:19:24.318014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61854 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61854 ']' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61854 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61854 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.161 killing process with pid 61854 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61854' 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61854 00:08:12.161 [2024-12-13 08:19:24.490930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.161 08:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61854 00:08:12.161 [2024-12-13 08:19:24.507268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.539 00:08:13.539 real 0m4.942s 00:08:13.539 user 0m7.137s 00:08:13.539 sys 0m0.785s 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 ************************************ 00:08:13.539 END TEST raid_state_function_test 00:08:13.539 ************************************ 00:08:13.539 08:19:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:13.539 08:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:13.539 08:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.539 08:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 ************************************ 00:08:13.539 START TEST raid_state_function_test_sb 00:08:13.539 ************************************ 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62107 00:08:13.539 Process raid pid: 62107 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62107' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62107 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62107 ']' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.539 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.539 08:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 [2024-12-13 08:19:25.789809] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:13.539 [2024-12-13 08:19:25.789929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.829 [2024-12-13 08:19:25.964591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.829 [2024-12-13 08:19:26.083556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.090 [2024-12-13 08:19:26.294867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.090 [2024-12-13 08:19:26.294952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.349 [2024-12-13 08:19:26.625546] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:14.349 [2024-12-13 08:19:26.625628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.349 [2024-12-13 08:19:26.625657] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.349 [2024-12-13 08:19:26.625678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.349 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.350 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.350 "name": "Existed_Raid", 00:08:14.350 "uuid": "ae6970f2-c62b-4660-bec7-a8f2e3d562d3", 00:08:14.350 "strip_size_kb": 64, 00:08:14.350 "state": "configuring", 00:08:14.350 "raid_level": "concat", 00:08:14.350 "superblock": true, 00:08:14.350 "num_base_bdevs": 2, 00:08:14.350 "num_base_bdevs_discovered": 0, 00:08:14.350 "num_base_bdevs_operational": 2, 00:08:14.350 "base_bdevs_list": [ 00:08:14.350 { 00:08:14.350 "name": "BaseBdev1", 00:08:14.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.350 "is_configured": false, 00:08:14.350 "data_offset": 0, 00:08:14.350 "data_size": 0 00:08:14.350 }, 00:08:14.350 { 00:08:14.350 "name": "BaseBdev2", 00:08:14.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.350 "is_configured": false, 00:08:14.350 "data_offset": 0, 00:08:14.350 "data_size": 0 00:08:14.350 } 00:08:14.350 ] 00:08:14.350 }' 00:08:14.350 08:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.350 08:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 [2024-12-13 08:19:27.076746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:14.919 [2024-12-13 08:19:27.076813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 [2024-12-13 08:19:27.088696] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.919 [2024-12-13 08:19:27.088747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.919 [2024-12-13 08:19:27.088758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.919 [2024-12-13 08:19:27.088772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 [2024-12-13 08:19:27.145895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.919 BaseBdev1 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.919 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 [ 00:08:14.919 { 00:08:14.919 "name": "BaseBdev1", 00:08:14.919 "aliases": [ 00:08:14.919 "43ebe111-90ed-46cf-a931-b09156b020e4" 00:08:14.919 ], 00:08:14.919 "product_name": "Malloc disk", 00:08:14.919 "block_size": 512, 00:08:14.919 "num_blocks": 65536, 00:08:14.919 "uuid": "43ebe111-90ed-46cf-a931-b09156b020e4", 00:08:14.919 "assigned_rate_limits": { 00:08:14.920 "rw_ios_per_sec": 0, 00:08:14.920 "rw_mbytes_per_sec": 0, 00:08:14.920 "r_mbytes_per_sec": 0, 00:08:14.920 "w_mbytes_per_sec": 0 00:08:14.920 }, 00:08:14.920 "claimed": true, 
00:08:14.920 "claim_type": "exclusive_write", 00:08:14.920 "zoned": false, 00:08:14.920 "supported_io_types": { 00:08:14.920 "read": true, 00:08:14.920 "write": true, 00:08:14.920 "unmap": true, 00:08:14.920 "flush": true, 00:08:14.920 "reset": true, 00:08:14.920 "nvme_admin": false, 00:08:14.920 "nvme_io": false, 00:08:14.920 "nvme_io_md": false, 00:08:14.920 "write_zeroes": true, 00:08:14.920 "zcopy": true, 00:08:14.920 "get_zone_info": false, 00:08:14.920 "zone_management": false, 00:08:14.920 "zone_append": false, 00:08:14.920 "compare": false, 00:08:14.920 "compare_and_write": false, 00:08:14.920 "abort": true, 00:08:14.920 "seek_hole": false, 00:08:14.920 "seek_data": false, 00:08:14.920 "copy": true, 00:08:14.920 "nvme_iov_md": false 00:08:14.920 }, 00:08:14.920 "memory_domains": [ 00:08:14.920 { 00:08:14.920 "dma_device_id": "system", 00:08:14.920 "dma_device_type": 1 00:08:14.920 }, 00:08:14.920 { 00:08:14.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.920 "dma_device_type": 2 00:08:14.920 } 00:08:14.920 ], 00:08:14.920 "driver_specific": {} 00:08:14.920 } 00:08:14.920 ] 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.920 08:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.920 "name": "Existed_Raid", 00:08:14.920 "uuid": "be70a4c2-abde-4179-8f08-28cc779620ba", 00:08:14.920 "strip_size_kb": 64, 00:08:14.920 "state": "configuring", 00:08:14.920 "raid_level": "concat", 00:08:14.920 "superblock": true, 00:08:14.920 "num_base_bdevs": 2, 00:08:14.920 "num_base_bdevs_discovered": 1, 00:08:14.920 "num_base_bdevs_operational": 2, 00:08:14.920 "base_bdevs_list": [ 00:08:14.920 { 00:08:14.920 "name": "BaseBdev1", 00:08:14.920 "uuid": "43ebe111-90ed-46cf-a931-b09156b020e4", 00:08:14.920 "is_configured": true, 00:08:14.920 "data_offset": 2048, 00:08:14.920 "data_size": 63488 00:08:14.920 }, 00:08:14.920 { 00:08:14.920 "name": "BaseBdev2", 00:08:14.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.920 
"is_configured": false, 00:08:14.920 "data_offset": 0, 00:08:14.920 "data_size": 0 00:08:14.920 } 00:08:14.920 ] 00:08:14.920 }' 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.920 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.489 [2024-12-13 08:19:27.653183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.489 [2024-12-13 08:19:27.653273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.489 [2024-12-13 08:19:27.665164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.489 [2024-12-13 08:19:27.667478] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.489 [2024-12-13 08:19:27.667525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.489 08:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.489 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.489 08:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.489 "name": "Existed_Raid", 00:08:15.489 "uuid": "21be5cc1-95ee-4ca1-a7e8-a92c232714ff", 00:08:15.489 "strip_size_kb": 64, 00:08:15.489 "state": "configuring", 00:08:15.489 "raid_level": "concat", 00:08:15.489 "superblock": true, 00:08:15.489 "num_base_bdevs": 2, 00:08:15.489 "num_base_bdevs_discovered": 1, 00:08:15.489 "num_base_bdevs_operational": 2, 00:08:15.489 "base_bdevs_list": [ 00:08:15.489 { 00:08:15.489 "name": "BaseBdev1", 00:08:15.490 "uuid": "43ebe111-90ed-46cf-a931-b09156b020e4", 00:08:15.490 "is_configured": true, 00:08:15.490 "data_offset": 2048, 00:08:15.490 "data_size": 63488 00:08:15.490 }, 00:08:15.490 { 00:08:15.490 "name": "BaseBdev2", 00:08:15.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.490 "is_configured": false, 00:08:15.490 "data_offset": 0, 00:08:15.490 "data_size": 0 00:08:15.490 } 00:08:15.490 ] 00:08:15.490 }' 00:08:15.490 08:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.490 08:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.749 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.749 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.749 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.008 [2024-12-13 08:19:28.116406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.008 [2024-12-13 08:19:28.116710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.008 [2024-12-13 08:19:28.116726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.008 [2024-12-13 08:19:28.117009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:16.008 [2024-12-13 08:19:28.117186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.008 [2024-12-13 08:19:28.117207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:16.008 BaseBdev2 00:08:16.008 [2024-12-13 08:19:28.117350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.008 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.008 08:19:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.008 [ 00:08:16.008 { 00:08:16.008 "name": "BaseBdev2", 00:08:16.008 "aliases": [ 00:08:16.008 "1c987b14-a9c3-4220-973d-4df5cff82632" 00:08:16.008 ], 00:08:16.008 "product_name": "Malloc disk", 00:08:16.008 "block_size": 512, 00:08:16.009 "num_blocks": 65536, 00:08:16.009 "uuid": "1c987b14-a9c3-4220-973d-4df5cff82632", 00:08:16.009 "assigned_rate_limits": { 00:08:16.009 "rw_ios_per_sec": 0, 00:08:16.009 "rw_mbytes_per_sec": 0, 00:08:16.009 "r_mbytes_per_sec": 0, 00:08:16.009 "w_mbytes_per_sec": 0 00:08:16.009 }, 00:08:16.009 "claimed": true, 00:08:16.009 "claim_type": "exclusive_write", 00:08:16.009 "zoned": false, 00:08:16.009 "supported_io_types": { 00:08:16.009 "read": true, 00:08:16.009 "write": true, 00:08:16.009 "unmap": true, 00:08:16.009 "flush": true, 00:08:16.009 "reset": true, 00:08:16.009 "nvme_admin": false, 00:08:16.009 "nvme_io": false, 00:08:16.009 "nvme_io_md": false, 00:08:16.009 "write_zeroes": true, 00:08:16.009 "zcopy": true, 00:08:16.009 "get_zone_info": false, 00:08:16.009 "zone_management": false, 00:08:16.009 "zone_append": false, 00:08:16.009 "compare": false, 00:08:16.009 "compare_and_write": false, 00:08:16.009 "abort": true, 00:08:16.009 "seek_hole": false, 00:08:16.009 "seek_data": false, 00:08:16.009 "copy": true, 00:08:16.009 "nvme_iov_md": false 00:08:16.009 }, 00:08:16.009 "memory_domains": [ 00:08:16.009 { 00:08:16.009 "dma_device_id": "system", 00:08:16.009 "dma_device_type": 1 00:08:16.009 }, 00:08:16.009 { 00:08:16.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.009 "dma_device_type": 2 00:08:16.009 } 00:08:16.009 ], 00:08:16.009 "driver_specific": {} 00:08:16.009 } 00:08:16.009 ] 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.009 08:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.009 08:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.009 "name": "Existed_Raid", 00:08:16.009 "uuid": "21be5cc1-95ee-4ca1-a7e8-a92c232714ff", 00:08:16.009 "strip_size_kb": 64, 00:08:16.009 "state": "online", 00:08:16.009 "raid_level": "concat", 00:08:16.009 "superblock": true, 00:08:16.009 "num_base_bdevs": 2, 00:08:16.009 "num_base_bdevs_discovered": 2, 00:08:16.009 "num_base_bdevs_operational": 2, 00:08:16.009 "base_bdevs_list": [ 00:08:16.009 { 00:08:16.009 "name": "BaseBdev1", 00:08:16.009 "uuid": "43ebe111-90ed-46cf-a931-b09156b020e4", 00:08:16.009 "is_configured": true, 00:08:16.009 "data_offset": 2048, 00:08:16.009 "data_size": 63488 00:08:16.009 }, 00:08:16.009 { 00:08:16.009 "name": "BaseBdev2", 00:08:16.009 "uuid": "1c987b14-a9c3-4220-973d-4df5cff82632", 00:08:16.009 "is_configured": true, 00:08:16.009 "data_offset": 2048, 00:08:16.009 "data_size": 63488 00:08:16.009 } 00:08:16.009 ] 00:08:16.009 }' 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.009 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.269 08:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.269 [2024-12-13 08:19:28.579986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.269 "name": "Existed_Raid", 00:08:16.269 "aliases": [ 00:08:16.269 "21be5cc1-95ee-4ca1-a7e8-a92c232714ff" 00:08:16.269 ], 00:08:16.269 "product_name": "Raid Volume", 00:08:16.269 "block_size": 512, 00:08:16.269 "num_blocks": 126976, 00:08:16.269 "uuid": "21be5cc1-95ee-4ca1-a7e8-a92c232714ff", 00:08:16.269 "assigned_rate_limits": { 00:08:16.269 "rw_ios_per_sec": 0, 00:08:16.269 "rw_mbytes_per_sec": 0, 00:08:16.269 "r_mbytes_per_sec": 0, 00:08:16.269 "w_mbytes_per_sec": 0 00:08:16.269 }, 00:08:16.269 "claimed": false, 00:08:16.269 "zoned": false, 00:08:16.269 "supported_io_types": { 00:08:16.269 "read": true, 00:08:16.269 "write": true, 00:08:16.269 "unmap": true, 00:08:16.269 "flush": true, 00:08:16.269 "reset": true, 00:08:16.269 "nvme_admin": false, 00:08:16.269 "nvme_io": false, 00:08:16.269 "nvme_io_md": false, 00:08:16.269 "write_zeroes": true, 00:08:16.269 "zcopy": false, 00:08:16.269 "get_zone_info": false, 00:08:16.269 "zone_management": false, 00:08:16.269 "zone_append": false, 00:08:16.269 "compare": false, 00:08:16.269 "compare_and_write": false, 00:08:16.269 "abort": false, 00:08:16.269 "seek_hole": false, 00:08:16.269 "seek_data": false, 00:08:16.269 "copy": false, 00:08:16.269 "nvme_iov_md": false 00:08:16.269 }, 00:08:16.269 "memory_domains": [ 00:08:16.269 { 00:08:16.269 "dma_device_id": 
"system", 00:08:16.269 "dma_device_type": 1 00:08:16.269 }, 00:08:16.269 { 00:08:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.269 "dma_device_type": 2 00:08:16.269 }, 00:08:16.269 { 00:08:16.269 "dma_device_id": "system", 00:08:16.269 "dma_device_type": 1 00:08:16.269 }, 00:08:16.269 { 00:08:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.269 "dma_device_type": 2 00:08:16.269 } 00:08:16.269 ], 00:08:16.269 "driver_specific": { 00:08:16.269 "raid": { 00:08:16.269 "uuid": "21be5cc1-95ee-4ca1-a7e8-a92c232714ff", 00:08:16.269 "strip_size_kb": 64, 00:08:16.269 "state": "online", 00:08:16.269 "raid_level": "concat", 00:08:16.269 "superblock": true, 00:08:16.269 "num_base_bdevs": 2, 00:08:16.269 "num_base_bdevs_discovered": 2, 00:08:16.269 "num_base_bdevs_operational": 2, 00:08:16.269 "base_bdevs_list": [ 00:08:16.269 { 00:08:16.269 "name": "BaseBdev1", 00:08:16.269 "uuid": "43ebe111-90ed-46cf-a931-b09156b020e4", 00:08:16.269 "is_configured": true, 00:08:16.269 "data_offset": 2048, 00:08:16.269 "data_size": 63488 00:08:16.269 }, 00:08:16.269 { 00:08:16.269 "name": "BaseBdev2", 00:08:16.269 "uuid": "1c987b14-a9c3-4220-973d-4df5cff82632", 00:08:16.269 "is_configured": true, 00:08:16.269 "data_offset": 2048, 00:08:16.269 "data_size": 63488 00:08:16.269 } 00:08:16.269 ] 00:08:16.269 } 00:08:16.269 } 00:08:16.269 }' 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.269 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.270 BaseBdev2' 00:08:16.270 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.529 [2024-12-13 08:19:28.791505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.529 [2024-12-13 08:19:28.791572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.529 [2024-12-13 08:19:28.791648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.529 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:16.530 08:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.530 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.789 "name": "Existed_Raid", 00:08:16.789 "uuid": "21be5cc1-95ee-4ca1-a7e8-a92c232714ff", 00:08:16.789 "strip_size_kb": 64, 00:08:16.789 "state": "offline", 00:08:16.789 "raid_level": "concat", 00:08:16.789 "superblock": true, 00:08:16.789 "num_base_bdevs": 2, 00:08:16.789 "num_base_bdevs_discovered": 1, 00:08:16.789 "num_base_bdevs_operational": 1, 00:08:16.789 "base_bdevs_list": [ 00:08:16.789 { 00:08:16.789 "name": null, 00:08:16.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.789 "is_configured": false, 00:08:16.789 "data_offset": 0, 00:08:16.789 "data_size": 63488 00:08:16.789 }, 00:08:16.789 { 00:08:16.789 "name": "BaseBdev2", 00:08:16.789 "uuid": "1c987b14-a9c3-4220-973d-4df5cff82632", 00:08:16.789 "is_configured": true, 00:08:16.789 "data_offset": 2048, 00:08:16.789 "data_size": 63488 00:08:16.789 } 00:08:16.789 ] 00:08:16.789 }' 00:08:16.789 
08:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.789 08:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.048 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.049 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.049 [2024-12-13 08:19:29.393945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.049 [2024-12-13 08:19:29.394002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62107 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62107 ']' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62107 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62107 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.308 killing process 
with pid 62107 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62107' 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62107 00:08:17.308 [2024-12-13 08:19:29.587831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.308 08:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62107 00:08:17.308 [2024-12-13 08:19:29.605772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.687 08:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.687 00:08:18.687 real 0m5.044s 00:08:18.687 user 0m7.237s 00:08:18.687 sys 0m0.848s 00:08:18.687 08:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.687 08:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.687 ************************************ 00:08:18.687 END TEST raid_state_function_test_sb 00:08:18.687 ************************************ 00:08:18.687 08:19:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:18.687 08:19:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:18.687 08:19:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.687 08:19:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.687 ************************************ 00:08:18.687 START TEST raid_superblock_test 00:08:18.687 ************************************ 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62359 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62359 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62359 ']' 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.687 08:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.687 [2024-12-13 08:19:30.896785] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:18.687 [2024-12-13 08:19:30.896919] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62359 ] 00:08:18.946 [2024-12-13 08:19:31.072333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.946 [2024-12-13 08:19:31.191986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.206 [2024-12-13 08:19:31.390462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.206 [2024-12-13 08:19:31.390524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.466 08:19:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.466 malloc1 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.466 [2024-12-13 08:19:31.807447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.466 [2024-12-13 08:19:31.807506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.466 [2024-12-13 08:19:31.807530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:19.466 [2024-12-13 08:19:31.807541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.466 
[2024-12-13 08:19:31.809903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.466 [2024-12-13 08:19:31.809937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.466 pt1 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.466 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.726 malloc2 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.726 08:19:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.726 [2024-12-13 08:19:31.862263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.726 [2024-12-13 08:19:31.862360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.726 [2024-12-13 08:19:31.862400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:19.726 [2024-12-13 08:19:31.862428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.726 [2024-12-13 08:19:31.864488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.726 [2024-12-13 08:19:31.864556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.726 pt2 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:19.726 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.727 [2024-12-13 08:19:31.874300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.727 [2024-12-13 08:19:31.876105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.727 [2024-12-13 08:19:31.876310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:19.727 [2024-12-13 08:19:31.876356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:19.727 
[2024-12-13 08:19:31.876607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:19.727 [2024-12-13 08:19:31.876784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:19.727 [2024-12-13 08:19:31.876825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:19.727 [2024-12-13 08:19:31.876993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.727 08:19:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.727 "name": "raid_bdev1", 00:08:19.727 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:19.727 "strip_size_kb": 64, 00:08:19.727 "state": "online", 00:08:19.727 "raid_level": "concat", 00:08:19.727 "superblock": true, 00:08:19.727 "num_base_bdevs": 2, 00:08:19.727 "num_base_bdevs_discovered": 2, 00:08:19.727 "num_base_bdevs_operational": 2, 00:08:19.727 "base_bdevs_list": [ 00:08:19.727 { 00:08:19.727 "name": "pt1", 00:08:19.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.727 "is_configured": true, 00:08:19.727 "data_offset": 2048, 00:08:19.727 "data_size": 63488 00:08:19.727 }, 00:08:19.727 { 00:08:19.727 "name": "pt2", 00:08:19.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.727 "is_configured": true, 00:08:19.727 "data_offset": 2048, 00:08:19.727 "data_size": 63488 00:08:19.727 } 00:08:19.727 ] 00:08:19.727 }' 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.727 08:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.987 
08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 [2024-12-13 08:19:32.285884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.987 "name": "raid_bdev1", 00:08:19.987 "aliases": [ 00:08:19.987 "71e81991-8c3e-4348-8b49-f45de8a46144" 00:08:19.987 ], 00:08:19.987 "product_name": "Raid Volume", 00:08:19.987 "block_size": 512, 00:08:19.987 "num_blocks": 126976, 00:08:19.987 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:19.987 "assigned_rate_limits": { 00:08:19.987 "rw_ios_per_sec": 0, 00:08:19.987 "rw_mbytes_per_sec": 0, 00:08:19.987 "r_mbytes_per_sec": 0, 00:08:19.987 "w_mbytes_per_sec": 0 00:08:19.987 }, 00:08:19.987 "claimed": false, 00:08:19.987 "zoned": false, 00:08:19.987 "supported_io_types": { 00:08:19.987 "read": true, 00:08:19.987 "write": true, 00:08:19.987 "unmap": true, 00:08:19.987 "flush": true, 00:08:19.987 "reset": true, 00:08:19.987 "nvme_admin": false, 00:08:19.987 "nvme_io": false, 00:08:19.987 "nvme_io_md": false, 00:08:19.987 "write_zeroes": true, 00:08:19.987 "zcopy": false, 00:08:19.987 "get_zone_info": false, 00:08:19.987 "zone_management": false, 00:08:19.987 "zone_append": false, 00:08:19.987 "compare": false, 00:08:19.987 "compare_and_write": false, 00:08:19.987 "abort": false, 00:08:19.987 "seek_hole": false, 00:08:19.987 
"seek_data": false, 00:08:19.987 "copy": false, 00:08:19.987 "nvme_iov_md": false 00:08:19.987 }, 00:08:19.987 "memory_domains": [ 00:08:19.987 { 00:08:19.987 "dma_device_id": "system", 00:08:19.987 "dma_device_type": 1 00:08:19.987 }, 00:08:19.987 { 00:08:19.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.987 "dma_device_type": 2 00:08:19.987 }, 00:08:19.987 { 00:08:19.987 "dma_device_id": "system", 00:08:19.987 "dma_device_type": 1 00:08:19.987 }, 00:08:19.987 { 00:08:19.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.987 "dma_device_type": 2 00:08:19.987 } 00:08:19.987 ], 00:08:19.987 "driver_specific": { 00:08:19.987 "raid": { 00:08:19.987 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:19.987 "strip_size_kb": 64, 00:08:19.987 "state": "online", 00:08:19.987 "raid_level": "concat", 00:08:19.987 "superblock": true, 00:08:19.987 "num_base_bdevs": 2, 00:08:19.987 "num_base_bdevs_discovered": 2, 00:08:19.987 "num_base_bdevs_operational": 2, 00:08:19.987 "base_bdevs_list": [ 00:08:19.987 { 00:08:19.987 "name": "pt1", 00:08:19.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.987 "is_configured": true, 00:08:19.987 "data_offset": 2048, 00:08:19.987 "data_size": 63488 00:08:19.987 }, 00:08:19.987 { 00:08:19.987 "name": "pt2", 00:08:19.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.987 "is_configured": true, 00:08:19.987 "data_offset": 2048, 00:08:19.987 "data_size": 63488 00:08:19.987 } 00:08:19.987 ] 00:08:19.987 } 00:08:19.987 } 00:08:19.987 }' 00:08:19.987 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.248 pt2' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.248 08:19:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 [2024-12-13 08:19:32.497532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71e81991-8c3e-4348-8b49-f45de8a46144 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 71e81991-8c3e-4348-8b49-f45de8a46144 ']' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 [2024-12-13 08:19:32.541138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.248 [2024-12-13 08:19:32.541166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.248 [2024-12-13 08:19:32.541261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.248 [2024-12-13 08:19:32.541311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.248 [2024-12-13 08:19:32.541350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.248 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.508 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 [2024-12-13 08:19:32.684947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:20.509 [2024-12-13 08:19:32.687042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:20.509 [2024-12-13 08:19:32.687129] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:20.509 [2024-12-13 08:19:32.687182] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:20.509 [2024-12-13 08:19:32.687198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.509 [2024-12-13 08:19:32.687209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:20.509 request: 00:08:20.509 { 00:08:20.509 "name": "raid_bdev1", 00:08:20.509 "raid_level": "concat", 00:08:20.509 "base_bdevs": [ 00:08:20.509 "malloc1", 00:08:20.509 "malloc2" 00:08:20.509 ], 00:08:20.509 "strip_size_kb": 64, 00:08:20.509 "superblock": false, 00:08:20.509 "method": "bdev_raid_create", 00:08:20.509 "req_id": 1 00:08:20.509 } 00:08:20.509 Got JSON-RPC error response 00:08:20.509 response: 00:08:20.509 { 00:08:20.509 "code": -17, 00:08:20.509 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:20.509 } 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:20.509 
08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 [2024-12-13 08:19:32.740829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.509 [2024-12-13 08:19:32.740971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.509 [2024-12-13 08:19:32.741016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:20.509 [2024-12-13 08:19:32.741051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.509 [2024-12-13 08:19:32.743442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.509 [2024-12-13 08:19:32.743540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.509 [2024-12-13 08:19:32.743691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:20.509 [2024-12-13 08:19:32.743786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:20.509 pt1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.509 "name": "raid_bdev1", 00:08:20.509 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:20.509 "strip_size_kb": 64, 00:08:20.509 "state": "configuring", 00:08:20.509 "raid_level": "concat", 00:08:20.509 "superblock": true, 00:08:20.509 "num_base_bdevs": 2, 00:08:20.509 "num_base_bdevs_discovered": 1, 00:08:20.509 "num_base_bdevs_operational": 2, 00:08:20.509 "base_bdevs_list": [ 00:08:20.509 { 00:08:20.509 "name": "pt1", 00:08:20.509 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:20.509 "is_configured": true, 00:08:20.509 "data_offset": 2048, 00:08:20.509 "data_size": 63488 00:08:20.509 }, 00:08:20.509 { 00:08:20.509 "name": null, 00:08:20.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.509 "is_configured": false, 00:08:20.509 "data_offset": 2048, 00:08:20.509 "data_size": 63488 00:08:20.509 } 00:08:20.509 ] 00:08:20.509 }' 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.509 08:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.769 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.029 [2024-12-13 08:19:33.136162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.029 [2024-12-13 08:19:33.136333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.029 [2024-12-13 08:19:33.136361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:21.029 [2024-12-13 08:19:33.136372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.029 [2024-12-13 08:19:33.136819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.029 [2024-12-13 08:19:33.136841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:21.029 [2024-12-13 08:19:33.136928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.029 [2024-12-13 08:19:33.136955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.029 [2024-12-13 08:19:33.137083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.029 [2024-12-13 08:19:33.137094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:21.029 [2024-12-13 08:19:33.137341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:21.029 [2024-12-13 08:19:33.137480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.029 [2024-12-13 08:19:33.137489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:21.029 [2024-12-13 08:19:33.137629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.029 pt2 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.029 "name": "raid_bdev1", 00:08:21.029 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:21.029 "strip_size_kb": 64, 00:08:21.029 "state": "online", 00:08:21.029 "raid_level": "concat", 00:08:21.029 "superblock": true, 00:08:21.029 "num_base_bdevs": 2, 00:08:21.029 "num_base_bdevs_discovered": 2, 00:08:21.029 "num_base_bdevs_operational": 2, 00:08:21.029 "base_bdevs_list": [ 00:08:21.029 { 00:08:21.029 "name": "pt1", 00:08:21.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.029 "is_configured": true, 00:08:21.029 "data_offset": 2048, 00:08:21.029 "data_size": 63488 00:08:21.029 }, 00:08:21.029 { 00:08:21.029 "name": "pt2", 00:08:21.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.029 "is_configured": true, 00:08:21.029 "data_offset": 2048, 00:08:21.029 "data_size": 63488 00:08:21.029 } 00:08:21.029 ] 00:08:21.029 }' 00:08:21.029 08:19:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.029 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.289 [2024-12-13 08:19:33.599619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.289 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.289 "name": "raid_bdev1", 00:08:21.289 "aliases": [ 00:08:21.289 "71e81991-8c3e-4348-8b49-f45de8a46144" 00:08:21.289 ], 00:08:21.289 "product_name": "Raid Volume", 00:08:21.289 "block_size": 512, 00:08:21.289 "num_blocks": 126976, 00:08:21.289 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:21.289 "assigned_rate_limits": { 00:08:21.289 "rw_ios_per_sec": 0, 00:08:21.289 "rw_mbytes_per_sec": 0, 00:08:21.289 
"r_mbytes_per_sec": 0, 00:08:21.289 "w_mbytes_per_sec": 0 00:08:21.289 }, 00:08:21.289 "claimed": false, 00:08:21.289 "zoned": false, 00:08:21.289 "supported_io_types": { 00:08:21.289 "read": true, 00:08:21.289 "write": true, 00:08:21.289 "unmap": true, 00:08:21.289 "flush": true, 00:08:21.289 "reset": true, 00:08:21.289 "nvme_admin": false, 00:08:21.289 "nvme_io": false, 00:08:21.289 "nvme_io_md": false, 00:08:21.289 "write_zeroes": true, 00:08:21.289 "zcopy": false, 00:08:21.289 "get_zone_info": false, 00:08:21.289 "zone_management": false, 00:08:21.289 "zone_append": false, 00:08:21.289 "compare": false, 00:08:21.289 "compare_and_write": false, 00:08:21.289 "abort": false, 00:08:21.289 "seek_hole": false, 00:08:21.289 "seek_data": false, 00:08:21.289 "copy": false, 00:08:21.289 "nvme_iov_md": false 00:08:21.289 }, 00:08:21.289 "memory_domains": [ 00:08:21.289 { 00:08:21.289 "dma_device_id": "system", 00:08:21.289 "dma_device_type": 1 00:08:21.289 }, 00:08:21.289 { 00:08:21.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.289 "dma_device_type": 2 00:08:21.289 }, 00:08:21.289 { 00:08:21.289 "dma_device_id": "system", 00:08:21.289 "dma_device_type": 1 00:08:21.289 }, 00:08:21.289 { 00:08:21.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.289 "dma_device_type": 2 00:08:21.289 } 00:08:21.289 ], 00:08:21.289 "driver_specific": { 00:08:21.289 "raid": { 00:08:21.289 "uuid": "71e81991-8c3e-4348-8b49-f45de8a46144", 00:08:21.289 "strip_size_kb": 64, 00:08:21.289 "state": "online", 00:08:21.289 "raid_level": "concat", 00:08:21.289 "superblock": true, 00:08:21.289 "num_base_bdevs": 2, 00:08:21.289 "num_base_bdevs_discovered": 2, 00:08:21.289 "num_base_bdevs_operational": 2, 00:08:21.289 "base_bdevs_list": [ 00:08:21.289 { 00:08:21.289 "name": "pt1", 00:08:21.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.289 "is_configured": true, 00:08:21.289 "data_offset": 2048, 00:08:21.289 "data_size": 63488 00:08:21.289 }, 00:08:21.289 { 00:08:21.289 "name": 
"pt2", 00:08:21.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.289 "is_configured": true, 00:08:21.289 "data_offset": 2048, 00:08:21.289 "data_size": 63488 00:08:21.289 } 00:08:21.290 ] 00:08:21.290 } 00:08:21.290 } 00:08:21.290 }' 00:08:21.290 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.549 pt2' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.549 08:19:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.549 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 [2024-12-13 08:19:33.811323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71e81991-8c3e-4348-8b49-f45de8a46144 '!=' 71e81991-8c3e-4348-8b49-f45de8a46144 ']' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62359 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62359 ']' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62359 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62359 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62359' 00:08:21.550 killing process with pid 62359 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62359 00:08:21.550 [2024-12-13 08:19:33.898805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.550 [2024-12-13 08:19:33.898993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.550 08:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62359 00:08:21.550 [2024-12-13 08:19:33.899093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.550 [2024-12-13 08:19:33.899158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.810 [2024-12-13 08:19:34.105517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.190 08:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:23.190 00:08:23.190 real 0m4.424s 00:08:23.190 user 0m6.213s 00:08:23.190 sys 0m0.725s 00:08:23.190 08:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.190 08:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:23.190 ************************************ 00:08:23.190 END TEST raid_superblock_test 00:08:23.190 ************************************ 00:08:23.190 08:19:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:23.190 08:19:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.190 08:19:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.190 08:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.190 ************************************ 00:08:23.190 START TEST raid_read_error_test 00:08:23.190 ************************************ 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:23.190 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JTlxMQfpAG 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62565 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62565 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62565 ']' 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.191 08:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.191 [2024-12-13 08:19:35.402091] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:23.191 [2024-12-13 08:19:35.402306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62565 ] 00:08:23.453 [2024-12-13 08:19:35.575428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.453 [2024-12-13 08:19:35.690767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.715 [2024-12-13 08:19:35.891096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.715 [2024-12-13 08:19:35.891134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.975 BaseBdev1_malloc 
00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.975 true 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.975 [2024-12-13 08:19:36.303959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.975 [2024-12-13 08:19:36.304024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.975 [2024-12-13 08:19:36.304047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.975 [2024-12-13 08:19:36.304058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.975 [2024-12-13 08:19:36.306201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.975 [2024-12-13 08:19:36.306241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.975 BaseBdev1 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.975 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.235 BaseBdev2_malloc 00:08:24.235 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.235 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:24.235 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 true 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 [2024-12-13 08:19:36.370310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:24.236 [2024-12-13 08:19:36.370369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.236 [2024-12-13 08:19:36.370387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:24.236 [2024-12-13 08:19:36.370397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.236 [2024-12-13 08:19:36.372501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.236 [2024-12-13 08:19:36.372623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:24.236 BaseBdev2 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 [2024-12-13 08:19:36.382363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.236 [2024-12-13 08:19:36.384209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.236 [2024-12-13 08:19:36.384398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:24.236 [2024-12-13 08:19:36.384415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:24.236 [2024-12-13 08:19:36.384656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:24.236 [2024-12-13 08:19:36.384828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:24.236 [2024-12-13 08:19:36.384851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:24.236 [2024-12-13 08:19:36.385015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.236 "name": "raid_bdev1", 00:08:24.236 "uuid": "f4ced814-6dac-4c2f-9a25-f8da096c9b81", 00:08:24.236 "strip_size_kb": 64, 00:08:24.236 "state": "online", 00:08:24.236 "raid_level": "concat", 00:08:24.236 "superblock": true, 00:08:24.236 "num_base_bdevs": 2, 00:08:24.236 "num_base_bdevs_discovered": 2, 00:08:24.236 "num_base_bdevs_operational": 2, 00:08:24.236 "base_bdevs_list": [ 00:08:24.236 { 00:08:24.236 "name": "BaseBdev1", 00:08:24.236 "uuid": "870f8706-b7c7-5bf5-a104-f5f711377d7c", 00:08:24.236 "is_configured": true, 00:08:24.236 "data_offset": 2048, 00:08:24.236 "data_size": 63488 00:08:24.236 }, 00:08:24.236 { 00:08:24.236 "name": "BaseBdev2", 00:08:24.236 
"uuid": "9461ef6b-a4ff-5532-ad03-3f2bdaad9366", 00:08:24.236 "is_configured": true, 00:08:24.236 "data_offset": 2048, 00:08:24.236 "data_size": 63488 00:08:24.236 } 00:08:24.236 ] 00:08:24.236 }' 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.236 08:19:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:24.496 08:19:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.755 [2024-12-13 08:19:36.930964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.693 "name": "raid_bdev1", 00:08:25.693 "uuid": "f4ced814-6dac-4c2f-9a25-f8da096c9b81", 00:08:25.693 "strip_size_kb": 64, 00:08:25.693 "state": "online", 00:08:25.693 "raid_level": "concat", 00:08:25.693 "superblock": true, 00:08:25.693 "num_base_bdevs": 2, 00:08:25.693 "num_base_bdevs_discovered": 2, 00:08:25.693 "num_base_bdevs_operational": 2, 00:08:25.693 "base_bdevs_list": [ 00:08:25.693 { 00:08:25.693 "name": "BaseBdev1", 00:08:25.693 "uuid": "870f8706-b7c7-5bf5-a104-f5f711377d7c", 00:08:25.693 "is_configured": true, 00:08:25.693 "data_offset": 2048, 00:08:25.693 "data_size": 63488 00:08:25.693 }, 00:08:25.693 { 00:08:25.693 "name": "BaseBdev2", 00:08:25.693 "uuid": 
"9461ef6b-a4ff-5532-ad03-3f2bdaad9366", 00:08:25.693 "is_configured": true, 00:08:25.693 "data_offset": 2048, 00:08:25.693 "data_size": 63488 00:08:25.693 } 00:08:25.693 ] 00:08:25.693 }' 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.693 08:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.260 [2024-12-13 08:19:38.370569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.260 [2024-12-13 08:19:38.370612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.260 [2024-12-13 08:19:38.373301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.260 [2024-12-13 08:19:38.373342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.260 [2024-12-13 08:19:38.373373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.260 [2024-12-13 08:19:38.373387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:26.260 { 00:08:26.260 "results": [ 00:08:26.260 { 00:08:26.260 "job": "raid_bdev1", 00:08:26.260 "core_mask": "0x1", 00:08:26.260 "workload": "randrw", 00:08:26.260 "percentage": 50, 00:08:26.260 "status": "finished", 00:08:26.260 "queue_depth": 1, 00:08:26.260 "io_size": 131072, 00:08:26.260 "runtime": 1.440216, 00:08:26.260 "iops": 15013.025823904192, 00:08:26.260 "mibps": 1876.628227988024, 00:08:26.260 "io_failed": 1, 00:08:26.260 "io_timeout": 0, 00:08:26.260 "avg_latency_us": 
92.20586699388306, 00:08:26.260 "min_latency_us": 26.382532751091702, 00:08:26.260 "max_latency_us": 1416.6078602620087 00:08:26.260 } 00:08:26.260 ], 00:08:26.260 "core_count": 1 00:08:26.260 } 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62565 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62565 ']' 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62565 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62565 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62565' 00:08:26.260 killing process with pid 62565 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62565 00:08:26.260 [2024-12-13 08:19:38.413380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.260 08:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62565 00:08:26.260 [2024-12-13 08:19:38.547151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JTlxMQfpAG 00:08:27.637 
08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:08:27.637 00:08:27.637 real 0m4.443s 00:08:27.637 user 0m5.349s 00:08:27.637 sys 0m0.561s 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.637 08:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.637 ************************************ 00:08:27.637 END TEST raid_read_error_test 00:08:27.637 ************************************ 00:08:27.637 08:19:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:27.637 08:19:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.637 08:19:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.637 08:19:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.637 ************************************ 00:08:27.637 START TEST raid_write_error_test 00:08:27.637 ************************************ 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.637 08:19:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7K0PNNDQBW 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62705 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62705 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62705 ']' 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.637 08:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.637 [2024-12-13 08:19:39.911707] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:08:27.637 [2024-12-13 08:19:39.911905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62705 ] 00:08:27.896 [2024-12-13 08:19:40.083549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.896 [2024-12-13 08:19:40.207015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.155 [2024-12-13 08:19:40.404084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.155 [2024-12-13 08:19:40.404163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.415 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 BaseBdev1_malloc 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 true 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 [2024-12-13 08:19:40.824796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.675 [2024-12-13 08:19:40.824875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.675 [2024-12-13 08:19:40.824898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:28.675 [2024-12-13 08:19:40.824908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.675 [2024-12-13 08:19:40.827245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.675 [2024-12-13 08:19:40.827355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.675 BaseBdev1 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 BaseBdev2_malloc 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.675 08:19:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 true 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 [2024-12-13 08:19:40.884551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.675 [2024-12-13 08:19:40.884621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.675 [2024-12-13 08:19:40.884642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.675 [2024-12-13 08:19:40.884655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.675 [2024-12-13 08:19:40.887047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.675 [2024-12-13 08:19:40.887093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.675 BaseBdev2 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 [2024-12-13 08:19:40.892614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:28.675 [2024-12-13 08:19:40.894587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.675 [2024-12-13 08:19:40.894778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.675 [2024-12-13 08:19:40.894795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:28.675 [2024-12-13 08:19:40.895078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:28.675 [2024-12-13 08:19:40.895294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.675 [2024-12-13 08:19:40.895310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:28.675 [2024-12-13 08:19:40.895503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.675 08:19:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.675 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.675 "name": "raid_bdev1", 00:08:28.675 "uuid": "089e2cf9-13e5-483c-95d5-4ffd8033953a", 00:08:28.675 "strip_size_kb": 64, 00:08:28.675 "state": "online", 00:08:28.676 "raid_level": "concat", 00:08:28.676 "superblock": true, 00:08:28.676 "num_base_bdevs": 2, 00:08:28.676 "num_base_bdevs_discovered": 2, 00:08:28.676 "num_base_bdevs_operational": 2, 00:08:28.676 "base_bdevs_list": [ 00:08:28.676 { 00:08:28.676 "name": "BaseBdev1", 00:08:28.676 "uuid": "ac6e8bc9-9201-5e15-a210-de0374060031", 00:08:28.676 "is_configured": true, 00:08:28.676 "data_offset": 2048, 00:08:28.676 "data_size": 63488 00:08:28.676 }, 00:08:28.676 { 00:08:28.676 "name": "BaseBdev2", 00:08:28.676 "uuid": "3a9d46a7-90b1-5684-bb99-d9dfe8bbcef7", 00:08:28.676 "is_configured": true, 00:08:28.676 "data_offset": 2048, 00:08:28.676 "data_size": 63488 00:08:28.676 } 00:08:28.676 ] 00:08:28.676 }' 00:08:28.676 08:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.676 08:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.246 08:19:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:29.246 08:19:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:29.246 [2024-12-13 08:19:41.464904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.186 "name": "raid_bdev1", 00:08:30.186 "uuid": "089e2cf9-13e5-483c-95d5-4ffd8033953a", 00:08:30.186 "strip_size_kb": 64, 00:08:30.186 "state": "online", 00:08:30.186 "raid_level": "concat", 00:08:30.186 "superblock": true, 00:08:30.186 "num_base_bdevs": 2, 00:08:30.186 "num_base_bdevs_discovered": 2, 00:08:30.186 "num_base_bdevs_operational": 2, 00:08:30.186 "base_bdevs_list": [ 00:08:30.186 { 00:08:30.186 "name": "BaseBdev1", 00:08:30.186 "uuid": "ac6e8bc9-9201-5e15-a210-de0374060031", 00:08:30.186 "is_configured": true, 00:08:30.186 "data_offset": 2048, 00:08:30.186 "data_size": 63488 00:08:30.186 }, 00:08:30.186 { 00:08:30.186 "name": "BaseBdev2", 00:08:30.186 "uuid": "3a9d46a7-90b1-5684-bb99-d9dfe8bbcef7", 00:08:30.186 "is_configured": true, 00:08:30.186 "data_offset": 2048, 00:08:30.186 "data_size": 63488 00:08:30.186 } 00:08:30.186 ] 00:08:30.186 }' 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.186 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.756 [2024-12-13 08:19:42.820824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.756 [2024-12-13 08:19:42.820928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.756 [2024-12-13 08:19:42.823867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.756 [2024-12-13 08:19:42.823958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.756 [2024-12-13 08:19:42.824021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.756 [2024-12-13 08:19:42.824072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:30.756 { 00:08:30.756 "results": [ 00:08:30.756 { 00:08:30.756 "job": "raid_bdev1", 00:08:30.756 "core_mask": "0x1", 00:08:30.756 "workload": "randrw", 00:08:30.756 "percentage": 50, 00:08:30.756 "status": "finished", 00:08:30.756 "queue_depth": 1, 00:08:30.756 "io_size": 131072, 00:08:30.756 "runtime": 1.356833, 00:08:30.756 "iops": 15231.79344841996, 00:08:30.756 "mibps": 1903.974181052495, 00:08:30.756 "io_failed": 1, 00:08:30.756 "io_timeout": 0, 00:08:30.756 "avg_latency_us": 90.84748677997672, 00:08:30.756 "min_latency_us": 26.941484716157206, 00:08:30.756 "max_latency_us": 1502.46288209607 00:08:30.756 } 00:08:30.756 ], 00:08:30.756 "core_count": 1 00:08:30.756 } 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62705 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62705 ']' 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62705 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62705 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62705' 00:08:30.756 killing process with pid 62705 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62705 00:08:30.756 [2024-12-13 08:19:42.868210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.756 08:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62705 00:08:30.756 [2024-12-13 08:19:43.004187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7K0PNNDQBW 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:32.136 ************************************ 00:08:32.136 END TEST raid_write_error_test 00:08:32.136 ************************************ 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:32.136 00:08:32.136 real 0m4.406s 00:08:32.136 user 0m5.288s 00:08:32.136 sys 0m0.565s 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.136 08:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.136 08:19:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:32.136 08:19:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:32.136 08:19:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:32.136 08:19:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.136 08:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.136 ************************************ 00:08:32.136 START TEST raid_state_function_test 00:08:32.136 ************************************ 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62849 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62849' 00:08:32.136 Process raid pid: 62849 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62849 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62849 ']' 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.136 08:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.136 [2024-12-13 08:19:44.386268] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:08:32.136 [2024-12-13 08:19:44.386376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.395 [2024-12-13 08:19:44.560274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.395 [2024-12-13 08:19:44.682036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.653 [2024-12-13 08:19:44.891896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.653 [2024-12-13 08:19:44.891947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.986 [2024-12-13 08:19:45.247678] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.986 [2024-12-13 08:19:45.247809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.986 [2024-12-13 08:19:45.247840] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.986 [2024-12-13 08:19:45.247852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.986 08:19:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.986 "name": "Existed_Raid", 00:08:32.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.986 "strip_size_kb": 0, 00:08:32.986 "state": "configuring", 00:08:32.986 
"raid_level": "raid1", 00:08:32.986 "superblock": false, 00:08:32.986 "num_base_bdevs": 2, 00:08:32.986 "num_base_bdevs_discovered": 0, 00:08:32.986 "num_base_bdevs_operational": 2, 00:08:32.986 "base_bdevs_list": [ 00:08:32.986 { 00:08:32.986 "name": "BaseBdev1", 00:08:32.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.986 "is_configured": false, 00:08:32.986 "data_offset": 0, 00:08:32.986 "data_size": 0 00:08:32.986 }, 00:08:32.986 { 00:08:32.986 "name": "BaseBdev2", 00:08:32.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.986 "is_configured": false, 00:08:32.986 "data_offset": 0, 00:08:32.986 "data_size": 0 00:08:32.986 } 00:08:32.986 ] 00:08:32.986 }' 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.986 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 [2024-12-13 08:19:45.714866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.567 [2024-12-13 08:19:45.714985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:33.567 [2024-12-13 08:19:45.726824] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.567 [2024-12-13 08:19:45.726921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.567 [2024-12-13 08:19:45.726979] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.567 [2024-12-13 08:19:45.727009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 [2024-12-13 08:19:45.775351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.567 BaseBdev1 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.567 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 [ 00:08:33.567 { 00:08:33.567 "name": "BaseBdev1", 00:08:33.567 "aliases": [ 00:08:33.567 "785c2f41-e56c-4ff6-814f-27b5a6390f83" 00:08:33.567 ], 00:08:33.567 "product_name": "Malloc disk", 00:08:33.567 "block_size": 512, 00:08:33.567 "num_blocks": 65536, 00:08:33.568 "uuid": "785c2f41-e56c-4ff6-814f-27b5a6390f83", 00:08:33.568 "assigned_rate_limits": { 00:08:33.568 "rw_ios_per_sec": 0, 00:08:33.568 "rw_mbytes_per_sec": 0, 00:08:33.568 "r_mbytes_per_sec": 0, 00:08:33.568 "w_mbytes_per_sec": 0 00:08:33.568 }, 00:08:33.568 "claimed": true, 00:08:33.568 "claim_type": "exclusive_write", 00:08:33.568 "zoned": false, 00:08:33.568 "supported_io_types": { 00:08:33.568 "read": true, 00:08:33.568 "write": true, 00:08:33.568 "unmap": true, 00:08:33.568 "flush": true, 00:08:33.568 "reset": true, 00:08:33.568 "nvme_admin": false, 00:08:33.568 "nvme_io": false, 00:08:33.568 "nvme_io_md": false, 00:08:33.568 "write_zeroes": true, 00:08:33.568 "zcopy": true, 00:08:33.568 "get_zone_info": false, 00:08:33.568 "zone_management": false, 00:08:33.568 "zone_append": false, 00:08:33.568 "compare": false, 00:08:33.568 "compare_and_write": false, 00:08:33.568 "abort": true, 00:08:33.568 "seek_hole": false, 00:08:33.568 "seek_data": false, 00:08:33.568 "copy": true, 00:08:33.568 "nvme_iov_md": 
false 00:08:33.568 }, 00:08:33.568 "memory_domains": [ 00:08:33.568 { 00:08:33.568 "dma_device_id": "system", 00:08:33.568 "dma_device_type": 1 00:08:33.568 }, 00:08:33.568 { 00:08:33.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.568 "dma_device_type": 2 00:08:33.568 } 00:08:33.568 ], 00:08:33.568 "driver_specific": {} 00:08:33.568 } 00:08:33.568 ] 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.568 08:19:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.568 "name": "Existed_Raid", 00:08:33.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.568 "strip_size_kb": 0, 00:08:33.568 "state": "configuring", 00:08:33.568 "raid_level": "raid1", 00:08:33.568 "superblock": false, 00:08:33.568 "num_base_bdevs": 2, 00:08:33.568 "num_base_bdevs_discovered": 1, 00:08:33.568 "num_base_bdevs_operational": 2, 00:08:33.568 "base_bdevs_list": [ 00:08:33.568 { 00:08:33.568 "name": "BaseBdev1", 00:08:33.568 "uuid": "785c2f41-e56c-4ff6-814f-27b5a6390f83", 00:08:33.568 "is_configured": true, 00:08:33.568 "data_offset": 0, 00:08:33.568 "data_size": 65536 00:08:33.568 }, 00:08:33.568 { 00:08:33.568 "name": "BaseBdev2", 00:08:33.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.568 "is_configured": false, 00:08:33.568 "data_offset": 0, 00:08:33.568 "data_size": 0 00:08:33.568 } 00:08:33.568 ] 00:08:33.568 }' 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.568 08:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.135 [2024-12-13 08:19:46.286584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.135 [2024-12-13 08:19:46.286648] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.135 [2024-12-13 08:19:46.298623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.135 [2024-12-13 08:19:46.300772] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.135 [2024-12-13 08:19:46.300822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.135 "name": "Existed_Raid", 00:08:34.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.135 "strip_size_kb": 0, 00:08:34.135 "state": "configuring", 00:08:34.135 "raid_level": "raid1", 00:08:34.135 "superblock": false, 00:08:34.135 "num_base_bdevs": 2, 00:08:34.135 "num_base_bdevs_discovered": 1, 00:08:34.135 "num_base_bdevs_operational": 2, 00:08:34.135 "base_bdevs_list": [ 00:08:34.135 { 00:08:34.135 "name": "BaseBdev1", 00:08:34.135 "uuid": "785c2f41-e56c-4ff6-814f-27b5a6390f83", 00:08:34.135 "is_configured": true, 00:08:34.135 "data_offset": 0, 00:08:34.135 "data_size": 65536 00:08:34.135 }, 00:08:34.135 { 00:08:34.135 "name": "BaseBdev2", 00:08:34.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.135 "is_configured": false, 00:08:34.135 "data_offset": 0, 00:08:34.135 "data_size": 0 00:08:34.135 } 00:08:34.135 
] 00:08:34.135 }' 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.135 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.394 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.394 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.394 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.394 [2024-12-13 08:19:46.756731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.394 [2024-12-13 08:19:46.756904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.394 [2024-12-13 08:19:46.756930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:34.394 [2024-12-13 08:19:46.757283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.394 [2024-12-13 08:19:46.757544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.394 [2024-12-13 08:19:46.757593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:34.394 [2024-12-13 08:19:46.757901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.653 BaseBdev2 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.653 08:19:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.653 [ 00:08:34.653 { 00:08:34.653 "name": "BaseBdev2", 00:08:34.653 "aliases": [ 00:08:34.653 "eeddc02d-ac0a-43cf-a790-42b4178c1026" 00:08:34.653 ], 00:08:34.653 "product_name": "Malloc disk", 00:08:34.653 "block_size": 512, 00:08:34.653 "num_blocks": 65536, 00:08:34.653 "uuid": "eeddc02d-ac0a-43cf-a790-42b4178c1026", 00:08:34.653 "assigned_rate_limits": { 00:08:34.653 "rw_ios_per_sec": 0, 00:08:34.653 "rw_mbytes_per_sec": 0, 00:08:34.653 "r_mbytes_per_sec": 0, 00:08:34.653 "w_mbytes_per_sec": 0 00:08:34.653 }, 00:08:34.653 "claimed": true, 00:08:34.653 "claim_type": "exclusive_write", 00:08:34.653 "zoned": false, 00:08:34.653 "supported_io_types": { 00:08:34.653 "read": true, 00:08:34.653 "write": true, 00:08:34.653 "unmap": true, 00:08:34.653 "flush": true, 00:08:34.653 "reset": true, 00:08:34.653 "nvme_admin": false, 00:08:34.653 "nvme_io": false, 00:08:34.653 "nvme_io_md": 
false, 00:08:34.653 "write_zeroes": true, 00:08:34.653 "zcopy": true, 00:08:34.653 "get_zone_info": false, 00:08:34.653 "zone_management": false, 00:08:34.653 "zone_append": false, 00:08:34.653 "compare": false, 00:08:34.653 "compare_and_write": false, 00:08:34.653 "abort": true, 00:08:34.653 "seek_hole": false, 00:08:34.653 "seek_data": false, 00:08:34.653 "copy": true, 00:08:34.653 "nvme_iov_md": false 00:08:34.653 }, 00:08:34.653 "memory_domains": [ 00:08:34.653 { 00:08:34.653 "dma_device_id": "system", 00:08:34.653 "dma_device_type": 1 00:08:34.653 }, 00:08:34.653 { 00:08:34.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.653 "dma_device_type": 2 00:08:34.653 } 00:08:34.653 ], 00:08:34.653 "driver_specific": {} 00:08:34.653 } 00:08:34.653 ] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.653 "name": "Existed_Raid", 00:08:34.653 "uuid": "ff85ebba-b923-412a-bd74-00199c4fe257", 00:08:34.653 "strip_size_kb": 0, 00:08:34.653 "state": "online", 00:08:34.653 "raid_level": "raid1", 00:08:34.653 "superblock": false, 00:08:34.653 "num_base_bdevs": 2, 00:08:34.653 "num_base_bdevs_discovered": 2, 00:08:34.653 "num_base_bdevs_operational": 2, 00:08:34.653 "base_bdevs_list": [ 00:08:34.653 { 00:08:34.653 "name": "BaseBdev1", 00:08:34.653 "uuid": "785c2f41-e56c-4ff6-814f-27b5a6390f83", 00:08:34.653 "is_configured": true, 00:08:34.653 "data_offset": 0, 00:08:34.653 "data_size": 65536 00:08:34.653 }, 00:08:34.653 { 00:08:34.653 "name": "BaseBdev2", 00:08:34.653 "uuid": "eeddc02d-ac0a-43cf-a790-42b4178c1026", 00:08:34.653 "is_configured": true, 00:08:34.653 "data_offset": 0, 00:08:34.653 "data_size": 65536 00:08:34.653 } 00:08:34.653 ] 00:08:34.653 }' 00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:34.653 08:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.912 [2024-12-13 08:19:47.232354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.912 "name": "Existed_Raid", 00:08:34.912 "aliases": [ 00:08:34.912 "ff85ebba-b923-412a-bd74-00199c4fe257" 00:08:34.912 ], 00:08:34.912 "product_name": "Raid Volume", 00:08:34.912 "block_size": 512, 00:08:34.912 "num_blocks": 65536, 00:08:34.912 "uuid": "ff85ebba-b923-412a-bd74-00199c4fe257", 00:08:34.912 "assigned_rate_limits": { 00:08:34.912 "rw_ios_per_sec": 0, 00:08:34.912 "rw_mbytes_per_sec": 0, 00:08:34.912 "r_mbytes_per_sec": 
0, 00:08:34.912 "w_mbytes_per_sec": 0 00:08:34.912 }, 00:08:34.912 "claimed": false, 00:08:34.912 "zoned": false, 00:08:34.912 "supported_io_types": { 00:08:34.912 "read": true, 00:08:34.912 "write": true, 00:08:34.912 "unmap": false, 00:08:34.912 "flush": false, 00:08:34.912 "reset": true, 00:08:34.912 "nvme_admin": false, 00:08:34.912 "nvme_io": false, 00:08:34.912 "nvme_io_md": false, 00:08:34.912 "write_zeroes": true, 00:08:34.912 "zcopy": false, 00:08:34.912 "get_zone_info": false, 00:08:34.912 "zone_management": false, 00:08:34.912 "zone_append": false, 00:08:34.912 "compare": false, 00:08:34.912 "compare_and_write": false, 00:08:34.912 "abort": false, 00:08:34.912 "seek_hole": false, 00:08:34.912 "seek_data": false, 00:08:34.912 "copy": false, 00:08:34.912 "nvme_iov_md": false 00:08:34.912 }, 00:08:34.912 "memory_domains": [ 00:08:34.912 { 00:08:34.912 "dma_device_id": "system", 00:08:34.912 "dma_device_type": 1 00:08:34.912 }, 00:08:34.912 { 00:08:34.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.912 "dma_device_type": 2 00:08:34.912 }, 00:08:34.912 { 00:08:34.912 "dma_device_id": "system", 00:08:34.912 "dma_device_type": 1 00:08:34.912 }, 00:08:34.912 { 00:08:34.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.912 "dma_device_type": 2 00:08:34.912 } 00:08:34.912 ], 00:08:34.912 "driver_specific": { 00:08:34.912 "raid": { 00:08:34.912 "uuid": "ff85ebba-b923-412a-bd74-00199c4fe257", 00:08:34.912 "strip_size_kb": 0, 00:08:34.912 "state": "online", 00:08:34.912 "raid_level": "raid1", 00:08:34.912 "superblock": false, 00:08:34.912 "num_base_bdevs": 2, 00:08:34.912 "num_base_bdevs_discovered": 2, 00:08:34.912 "num_base_bdevs_operational": 2, 00:08:34.912 "base_bdevs_list": [ 00:08:34.912 { 00:08:34.912 "name": "BaseBdev1", 00:08:34.912 "uuid": "785c2f41-e56c-4ff6-814f-27b5a6390f83", 00:08:34.912 "is_configured": true, 00:08:34.912 "data_offset": 0, 00:08:34.912 "data_size": 65536 00:08:34.912 }, 00:08:34.912 { 00:08:34.912 "name": "BaseBdev2", 
00:08:34.912 "uuid": "eeddc02d-ac0a-43cf-a790-42b4178c1026", 00:08:34.912 "is_configured": true, 00:08:34.912 "data_offset": 0, 00:08:34.912 "data_size": 65536 00:08:34.912 } 00:08:34.912 ] 00:08:34.912 } 00:08:34.912 } 00:08:34.912 }' 00:08:34.912 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.171 BaseBdev2' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.171 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.171 [2024-12-13 08:19:47.475646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.431 "name": "Existed_Raid", 00:08:35.431 "uuid": "ff85ebba-b923-412a-bd74-00199c4fe257", 00:08:35.431 "strip_size_kb": 0, 00:08:35.431 "state": "online", 00:08:35.431 "raid_level": "raid1", 00:08:35.431 "superblock": false, 00:08:35.431 "num_base_bdevs": 2, 00:08:35.431 "num_base_bdevs_discovered": 1, 00:08:35.431 "num_base_bdevs_operational": 1, 00:08:35.431 "base_bdevs_list": [ 00:08:35.431 
{ 00:08:35.431 "name": null, 00:08:35.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.431 "is_configured": false, 00:08:35.431 "data_offset": 0, 00:08:35.431 "data_size": 65536 00:08:35.431 }, 00:08:35.431 { 00:08:35.431 "name": "BaseBdev2", 00:08:35.431 "uuid": "eeddc02d-ac0a-43cf-a790-42b4178c1026", 00:08:35.431 "is_configured": true, 00:08:35.431 "data_offset": 0, 00:08:35.431 "data_size": 65536 00:08:35.431 } 00:08:35.431 ] 00:08:35.431 }' 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.431 08:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:36.001 [2024-12-13 08:19:48.121063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.001 [2024-12-13 08:19:48.121179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.001 [2024-12-13 08:19:48.214393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.001 [2024-12-13 08:19:48.214535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.001 [2024-12-13 08:19:48.214577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62849 00:08:36.001 08:19:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62849 ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62849 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62849 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62849' 00:08:36.001 killing process with pid 62849 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62849 00:08:36.001 [2024-12-13 08:19:48.314269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.001 08:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62849 00:08:36.001 [2024-12-13 08:19:48.330728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.383 00:08:37.383 real 0m5.167s 00:08:37.383 user 0m7.495s 00:08:37.383 sys 0m0.833s 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.383 ************************************ 00:08:37.383 END TEST raid_state_function_test 00:08:37.383 ************************************ 00:08:37.383 08:19:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:37.383 08:19:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.383 08:19:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.383 08:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.383 ************************************ 00:08:37.383 START TEST raid_state_function_test_sb 00:08:37.383 ************************************ 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63102 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63102' 00:08:37.383 Process raid pid: 63102 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63102 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63102 ']' 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.383 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.383 08:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.383 [2024-12-13 08:19:49.640711] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:37.383 [2024-12-13 08:19:49.640878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.652 [2024-12-13 08:19:49.822676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.652 [2024-12-13 08:19:49.941875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.911 [2024-12-13 08:19:50.157133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.911 [2024-12-13 08:19:50.157178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.169 [2024-12-13 08:19:50.526826] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.169 [2024-12-13 08:19:50.526927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.169 [2024-12-13 08:19:50.526966] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.169 [2024-12-13 08:19:50.526992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.169 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.429 "name": "Existed_Raid", 00:08:38.429 "uuid": "0e313eba-d11c-4928-b332-c5c4d41f74d4", 00:08:38.429 "strip_size_kb": 0, 00:08:38.429 "state": "configuring", 00:08:38.429 "raid_level": "raid1", 00:08:38.429 "superblock": true, 00:08:38.429 "num_base_bdevs": 2, 00:08:38.429 "num_base_bdevs_discovered": 0, 00:08:38.429 "num_base_bdevs_operational": 2, 00:08:38.429 "base_bdevs_list": [ 00:08:38.429 { 00:08:38.429 "name": "BaseBdev1", 00:08:38.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.429 "is_configured": false, 00:08:38.429 "data_offset": 0, 00:08:38.429 "data_size": 0 00:08:38.429 }, 00:08:38.429 { 00:08:38.429 "name": "BaseBdev2", 00:08:38.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.429 "is_configured": false, 00:08:38.429 "data_offset": 0, 00:08:38.429 "data_size": 0 00:08:38.429 } 00:08:38.429 ] 00:08:38.429 }' 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.429 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 [2024-12-13 08:19:50.962024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:38.688 [2024-12-13 08:19:50.962111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 [2024-12-13 08:19:50.969978] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.688 [2024-12-13 08:19:50.970055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.688 [2024-12-13 08:19:50.970084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.688 [2024-12-13 08:19:50.970124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.688 08:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 [2024-12-13 08:19:51.015425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.688 BaseBdev1 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.688 [ 00:08:38.688 { 00:08:38.688 "name": "BaseBdev1", 00:08:38.688 "aliases": [ 00:08:38.688 "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52" 00:08:38.688 ], 00:08:38.688 "product_name": "Malloc disk", 00:08:38.688 "block_size": 512, 00:08:38.688 "num_blocks": 65536, 00:08:38.688 "uuid": "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52", 00:08:38.688 "assigned_rate_limits": { 00:08:38.688 "rw_ios_per_sec": 0, 00:08:38.688 "rw_mbytes_per_sec": 0, 00:08:38.688 "r_mbytes_per_sec": 0, 00:08:38.688 "w_mbytes_per_sec": 0 00:08:38.688 }, 00:08:38.688 "claimed": true, 
00:08:38.688 "claim_type": "exclusive_write", 00:08:38.688 "zoned": false, 00:08:38.688 "supported_io_types": { 00:08:38.688 "read": true, 00:08:38.688 "write": true, 00:08:38.688 "unmap": true, 00:08:38.688 "flush": true, 00:08:38.688 "reset": true, 00:08:38.688 "nvme_admin": false, 00:08:38.688 "nvme_io": false, 00:08:38.688 "nvme_io_md": false, 00:08:38.688 "write_zeroes": true, 00:08:38.688 "zcopy": true, 00:08:38.688 "get_zone_info": false, 00:08:38.688 "zone_management": false, 00:08:38.688 "zone_append": false, 00:08:38.688 "compare": false, 00:08:38.688 "compare_and_write": false, 00:08:38.688 "abort": true, 00:08:38.688 "seek_hole": false, 00:08:38.688 "seek_data": false, 00:08:38.688 "copy": true, 00:08:38.688 "nvme_iov_md": false 00:08:38.688 }, 00:08:38.688 "memory_domains": [ 00:08:38.688 { 00:08:38.688 "dma_device_id": "system", 00:08:38.688 "dma_device_type": 1 00:08:38.688 }, 00:08:38.688 { 00:08:38.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.688 "dma_device_type": 2 00:08:38.688 } 00:08:38.688 ], 00:08:38.688 "driver_specific": {} 00:08:38.688 } 00:08:38.688 ] 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.688 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.947 "name": "Existed_Raid", 00:08:38.947 "uuid": "cfefdbb6-dcc5-40b6-94f4-0a214fc8edd0", 00:08:38.947 "strip_size_kb": 0, 00:08:38.947 "state": "configuring", 00:08:38.947 "raid_level": "raid1", 00:08:38.947 "superblock": true, 00:08:38.947 "num_base_bdevs": 2, 00:08:38.947 "num_base_bdevs_discovered": 1, 00:08:38.947 "num_base_bdevs_operational": 2, 00:08:38.947 "base_bdevs_list": [ 00:08:38.947 { 00:08:38.947 "name": "BaseBdev1", 00:08:38.947 "uuid": "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52", 00:08:38.947 "is_configured": true, 00:08:38.947 "data_offset": 2048, 00:08:38.947 "data_size": 63488 00:08:38.947 }, 00:08:38.947 { 00:08:38.947 "name": "BaseBdev2", 00:08:38.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.947 "is_configured": false, 00:08:38.947 
"data_offset": 0, 00:08:38.947 "data_size": 0 00:08:38.947 } 00:08:38.947 ] 00:08:38.947 }' 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.947 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.205 [2024-12-13 08:19:51.498703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.205 [2024-12-13 08:19:51.498823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.205 [2024-12-13 08:19:51.510727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.205 [2024-12-13 08:19:51.512783] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.205 [2024-12-13 08:19:51.512867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.205 "name": "Existed_Raid", 00:08:39.205 "uuid": "4570a2f6-abc9-484c-89b8-6ec793ed133c", 00:08:39.205 "strip_size_kb": 0, 00:08:39.205 "state": "configuring", 00:08:39.205 "raid_level": "raid1", 00:08:39.205 "superblock": true, 00:08:39.205 "num_base_bdevs": 2, 00:08:39.205 "num_base_bdevs_discovered": 1, 00:08:39.205 "num_base_bdevs_operational": 2, 00:08:39.205 "base_bdevs_list": [ 00:08:39.205 { 00:08:39.205 "name": "BaseBdev1", 00:08:39.205 "uuid": "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52", 00:08:39.205 "is_configured": true, 00:08:39.205 "data_offset": 2048, 00:08:39.205 "data_size": 63488 00:08:39.205 }, 00:08:39.205 { 00:08:39.205 "name": "BaseBdev2", 00:08:39.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.205 "is_configured": false, 00:08:39.205 "data_offset": 0, 00:08:39.205 "data_size": 0 00:08:39.205 } 00:08:39.205 ] 00:08:39.205 }' 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.205 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.772 [2024-12-13 08:19:51.973076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.772 [2024-12-13 08:19:51.973494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.772 [2024-12-13 08:19:51.973553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.772 [2024-12-13 08:19:51.973856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:39.772 
BaseBdev2 00:08:39.772 [2024-12-13 08:19:51.974089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.772 [2024-12-13 08:19:51.974167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:39.772 [2024-12-13 08:19:51.974441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.772 08:19:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.772 [ 00:08:39.772 { 00:08:39.772 "name": "BaseBdev2", 00:08:39.772 "aliases": [ 00:08:39.772 "10780aaa-0eef-4a6c-97cd-6db2dcfe9e9c" 00:08:39.772 ], 00:08:39.772 "product_name": "Malloc disk", 00:08:39.772 "block_size": 512, 00:08:39.772 "num_blocks": 65536, 00:08:39.772 "uuid": "10780aaa-0eef-4a6c-97cd-6db2dcfe9e9c", 00:08:39.772 "assigned_rate_limits": { 00:08:39.772 "rw_ios_per_sec": 0, 00:08:39.772 "rw_mbytes_per_sec": 0, 00:08:39.772 "r_mbytes_per_sec": 0, 00:08:39.772 "w_mbytes_per_sec": 0 00:08:39.772 }, 00:08:39.772 "claimed": true, 00:08:39.772 "claim_type": "exclusive_write", 00:08:39.772 "zoned": false, 00:08:39.772 "supported_io_types": { 00:08:39.772 "read": true, 00:08:39.772 "write": true, 00:08:39.772 "unmap": true, 00:08:39.772 "flush": true, 00:08:39.772 "reset": true, 00:08:39.772 "nvme_admin": false, 00:08:39.772 "nvme_io": false, 00:08:39.772 "nvme_io_md": false, 00:08:39.772 "write_zeroes": true, 00:08:39.772 "zcopy": true, 00:08:39.772 "get_zone_info": false, 00:08:39.772 "zone_management": false, 00:08:39.772 "zone_append": false, 00:08:39.772 "compare": false, 00:08:39.772 "compare_and_write": false, 00:08:39.772 "abort": true, 00:08:39.772 "seek_hole": false, 00:08:39.772 "seek_data": false, 00:08:39.772 "copy": true, 00:08:39.772 "nvme_iov_md": false 00:08:39.772 }, 00:08:39.772 "memory_domains": [ 00:08:39.772 { 00:08:39.772 "dma_device_id": "system", 00:08:39.772 "dma_device_type": 1 00:08:39.772 }, 00:08:39.772 { 00:08:39.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.772 "dma_device_type": 2 00:08:39.772 } 00:08:39.772 ], 00:08:39.772 "driver_specific": {} 00:08:39.772 } 00:08:39.772 ] 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.772 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:39.772 "name": "Existed_Raid", 00:08:39.772 "uuid": "4570a2f6-abc9-484c-89b8-6ec793ed133c", 00:08:39.772 "strip_size_kb": 0, 00:08:39.772 "state": "online", 00:08:39.772 "raid_level": "raid1", 00:08:39.772 "superblock": true, 00:08:39.772 "num_base_bdevs": 2, 00:08:39.772 "num_base_bdevs_discovered": 2, 00:08:39.772 "num_base_bdevs_operational": 2, 00:08:39.772 "base_bdevs_list": [ 00:08:39.772 { 00:08:39.772 "name": "BaseBdev1", 00:08:39.772 "uuid": "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52", 00:08:39.773 "is_configured": true, 00:08:39.773 "data_offset": 2048, 00:08:39.773 "data_size": 63488 00:08:39.773 }, 00:08:39.773 { 00:08:39.773 "name": "BaseBdev2", 00:08:39.773 "uuid": "10780aaa-0eef-4a6c-97cd-6db2dcfe9e9c", 00:08:39.773 "is_configured": true, 00:08:39.773 "data_offset": 2048, 00:08:39.773 "data_size": 63488 00:08:39.773 } 00:08:39.773 ] 00:08:39.773 }' 00:08:39.773 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.773 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.339 [2024-12-13 08:19:52.492569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.339 "name": "Existed_Raid", 00:08:40.339 "aliases": [ 00:08:40.339 "4570a2f6-abc9-484c-89b8-6ec793ed133c" 00:08:40.339 ], 00:08:40.339 "product_name": "Raid Volume", 00:08:40.339 "block_size": 512, 00:08:40.339 "num_blocks": 63488, 00:08:40.339 "uuid": "4570a2f6-abc9-484c-89b8-6ec793ed133c", 00:08:40.339 "assigned_rate_limits": { 00:08:40.339 "rw_ios_per_sec": 0, 00:08:40.339 "rw_mbytes_per_sec": 0, 00:08:40.339 "r_mbytes_per_sec": 0, 00:08:40.339 "w_mbytes_per_sec": 0 00:08:40.339 }, 00:08:40.339 "claimed": false, 00:08:40.339 "zoned": false, 00:08:40.339 "supported_io_types": { 00:08:40.339 "read": true, 00:08:40.339 "write": true, 00:08:40.339 "unmap": false, 00:08:40.339 "flush": false, 00:08:40.339 "reset": true, 00:08:40.339 "nvme_admin": false, 00:08:40.339 "nvme_io": false, 00:08:40.339 "nvme_io_md": false, 00:08:40.339 "write_zeroes": true, 00:08:40.339 "zcopy": false, 00:08:40.339 "get_zone_info": false, 00:08:40.339 "zone_management": false, 00:08:40.339 "zone_append": false, 00:08:40.339 "compare": false, 00:08:40.339 "compare_and_write": false, 00:08:40.339 "abort": false, 00:08:40.339 "seek_hole": false, 00:08:40.339 "seek_data": false, 00:08:40.339 "copy": false, 00:08:40.339 "nvme_iov_md": false 00:08:40.339 }, 00:08:40.339 "memory_domains": [ 00:08:40.339 { 00:08:40.339 "dma_device_id": "system", 00:08:40.339 "dma_device_type": 1 00:08:40.339 }, 
00:08:40.339 { 00:08:40.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.339 "dma_device_type": 2 00:08:40.339 }, 00:08:40.339 { 00:08:40.339 "dma_device_id": "system", 00:08:40.339 "dma_device_type": 1 00:08:40.339 }, 00:08:40.339 { 00:08:40.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.339 "dma_device_type": 2 00:08:40.339 } 00:08:40.339 ], 00:08:40.339 "driver_specific": { 00:08:40.339 "raid": { 00:08:40.339 "uuid": "4570a2f6-abc9-484c-89b8-6ec793ed133c", 00:08:40.339 "strip_size_kb": 0, 00:08:40.339 "state": "online", 00:08:40.339 "raid_level": "raid1", 00:08:40.339 "superblock": true, 00:08:40.339 "num_base_bdevs": 2, 00:08:40.339 "num_base_bdevs_discovered": 2, 00:08:40.339 "num_base_bdevs_operational": 2, 00:08:40.339 "base_bdevs_list": [ 00:08:40.339 { 00:08:40.339 "name": "BaseBdev1", 00:08:40.339 "uuid": "59ebf096-fc7a-4fd1-b5c7-3a314d9adc52", 00:08:40.339 "is_configured": true, 00:08:40.339 "data_offset": 2048, 00:08:40.339 "data_size": 63488 00:08:40.339 }, 00:08:40.339 { 00:08:40.339 "name": "BaseBdev2", 00:08:40.339 "uuid": "10780aaa-0eef-4a6c-97cd-6db2dcfe9e9c", 00:08:40.339 "is_configured": true, 00:08:40.339 "data_offset": 2048, 00:08:40.339 "data_size": 63488 00:08:40.339 } 00:08:40.339 ] 00:08:40.339 } 00:08:40.339 } 00:08:40.339 }' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.339 BaseBdev2' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.339 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.598 [2024-12-13 08:19:52.719901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.598 
08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.598 "name": "Existed_Raid", 00:08:40.598 "uuid": "4570a2f6-abc9-484c-89b8-6ec793ed133c", 00:08:40.598 "strip_size_kb": 0, 00:08:40.598 "state": "online", 00:08:40.598 "raid_level": "raid1", 00:08:40.598 "superblock": true, 00:08:40.598 "num_base_bdevs": 2, 00:08:40.598 "num_base_bdevs_discovered": 1, 00:08:40.598 "num_base_bdevs_operational": 1, 00:08:40.598 "base_bdevs_list": [ 00:08:40.598 { 00:08:40.598 "name": null, 00:08:40.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.598 "is_configured": false, 00:08:40.598 "data_offset": 0, 00:08:40.598 "data_size": 63488 00:08:40.598 }, 00:08:40.598 { 00:08:40.598 "name": "BaseBdev2", 00:08:40.598 "uuid": "10780aaa-0eef-4a6c-97cd-6db2dcfe9e9c", 00:08:40.598 "is_configured": true, 00:08:40.598 "data_offset": 2048, 00:08:40.598 "data_size": 63488 00:08:40.598 } 00:08:40.598 ] 00:08:40.598 }' 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.598 08:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.166 08:19:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.166 [2024-12-13 08:19:53.338130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.166 [2024-12-13 08:19:53.338279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.166 [2024-12-13 08:19:53.437978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.166 [2024-12-13 08:19:53.438096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.166 [2024-12-13 08:19:53.438161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63102 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63102 ']' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63102 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.166 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63102 00:08:41.425 killing process with pid 63102 00:08:41.425 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:41.425 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.426 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63102' 00:08:41.426 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63102 00:08:41.426 [2024-12-13 08:19:53.534743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.426 08:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63102 00:08:41.426 [2024-12-13 08:19:53.551660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.361 08:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.361 00:08:42.361 real 0m5.174s 00:08:42.361 user 0m7.483s 00:08:42.361 sys 0m0.841s 00:08:42.361 08:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.361 08:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.361 ************************************ 00:08:42.361 END TEST raid_state_function_test_sb 00:08:42.361 ************************************ 00:08:42.620 08:19:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:42.620 08:19:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.620 08:19:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.620 08:19:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.621 ************************************ 00:08:42.621 START TEST raid_superblock_test 00:08:42.621 ************************************ 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63354 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63354 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63354 ']' 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.621 08:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.621 [2024-12-13 08:19:54.852941] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:42.621 [2024-12-13 08:19:54.853170] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63354 ] 00:08:42.881 [2024-12-13 08:19:55.025777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.881 [2024-12-13 08:19:55.142340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.145 [2024-12-13 08:19:55.342023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.145 [2024-12-13 08:19:55.342155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.404 08:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.404 malloc1 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.404 [2024-12-13 08:19:55.748255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.404 [2024-12-13 08:19:55.748424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.404 [2024-12-13 08:19:55.748465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:43.404 [2024-12-13 08:19:55.748495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.404 
[2024-12-13 08:19:55.750806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.404 [2024-12-13 08:19:55.750888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.404 pt1 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.404 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 malloc2 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.663 08:19:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 [2024-12-13 08:19:55.807248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.663 [2024-12-13 08:19:55.807388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.663 [2024-12-13 08:19:55.807433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:43.663 [2024-12-13 08:19:55.807462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.663 [2024-12-13 08:19:55.809614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.663 [2024-12-13 08:19:55.809683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.663 pt2 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 [2024-12-13 08:19:55.819266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.663 [2024-12-13 08:19:55.821231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.663 [2024-12-13 08:19:55.821441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:43.663 [2024-12-13 08:19:55.821491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.663 [2024-12-13 
08:19:55.821806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.663 [2024-12-13 08:19:55.822023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:43.663 [2024-12-13 08:19:55.822073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:43.663 [2024-12-13 08:19:55.822349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.663 08:19:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.663 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.663 "name": "raid_bdev1", 00:08:43.663 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:43.663 "strip_size_kb": 0, 00:08:43.663 "state": "online", 00:08:43.663 "raid_level": "raid1", 00:08:43.663 "superblock": true, 00:08:43.663 "num_base_bdevs": 2, 00:08:43.663 "num_base_bdevs_discovered": 2, 00:08:43.663 "num_base_bdevs_operational": 2, 00:08:43.663 "base_bdevs_list": [ 00:08:43.663 { 00:08:43.663 "name": "pt1", 00:08:43.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.663 "is_configured": true, 00:08:43.663 "data_offset": 2048, 00:08:43.663 "data_size": 63488 00:08:43.663 }, 00:08:43.664 { 00:08:43.664 "name": "pt2", 00:08:43.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.664 "is_configured": true, 00:08:43.664 "data_offset": 2048, 00:08:43.664 "data_size": 63488 00:08:43.664 } 00:08:43.664 ] 00:08:43.664 }' 00:08:43.664 08:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.664 08:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.231 
08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 [2024-12-13 08:19:56.326745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.231 "name": "raid_bdev1", 00:08:44.231 "aliases": [ 00:08:44.231 "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9" 00:08:44.231 ], 00:08:44.231 "product_name": "Raid Volume", 00:08:44.231 "block_size": 512, 00:08:44.231 "num_blocks": 63488, 00:08:44.231 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:44.231 "assigned_rate_limits": { 00:08:44.231 "rw_ios_per_sec": 0, 00:08:44.231 "rw_mbytes_per_sec": 0, 00:08:44.231 "r_mbytes_per_sec": 0, 00:08:44.231 "w_mbytes_per_sec": 0 00:08:44.231 }, 00:08:44.231 "claimed": false, 00:08:44.231 "zoned": false, 00:08:44.231 "supported_io_types": { 00:08:44.231 "read": true, 00:08:44.231 "write": true, 00:08:44.231 "unmap": false, 00:08:44.231 "flush": false, 00:08:44.231 "reset": true, 00:08:44.231 "nvme_admin": false, 00:08:44.231 "nvme_io": false, 00:08:44.231 "nvme_io_md": false, 00:08:44.231 "write_zeroes": true, 00:08:44.231 "zcopy": false, 00:08:44.231 "get_zone_info": false, 00:08:44.231 "zone_management": false, 00:08:44.231 "zone_append": false, 00:08:44.231 "compare": false, 00:08:44.231 "compare_and_write": false, 00:08:44.231 "abort": false, 00:08:44.231 "seek_hole": false, 
00:08:44.231 "seek_data": false, 00:08:44.231 "copy": false, 00:08:44.231 "nvme_iov_md": false 00:08:44.231 }, 00:08:44.231 "memory_domains": [ 00:08:44.231 { 00:08:44.231 "dma_device_id": "system", 00:08:44.231 "dma_device_type": 1 00:08:44.231 }, 00:08:44.231 { 00:08:44.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.231 "dma_device_type": 2 00:08:44.231 }, 00:08:44.231 { 00:08:44.231 "dma_device_id": "system", 00:08:44.231 "dma_device_type": 1 00:08:44.231 }, 00:08:44.231 { 00:08:44.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.231 "dma_device_type": 2 00:08:44.231 } 00:08:44.231 ], 00:08:44.231 "driver_specific": { 00:08:44.231 "raid": { 00:08:44.231 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:44.231 "strip_size_kb": 0, 00:08:44.231 "state": "online", 00:08:44.231 "raid_level": "raid1", 00:08:44.231 "superblock": true, 00:08:44.231 "num_base_bdevs": 2, 00:08:44.231 "num_base_bdevs_discovered": 2, 00:08:44.231 "num_base_bdevs_operational": 2, 00:08:44.231 "base_bdevs_list": [ 00:08:44.231 { 00:08:44.231 "name": "pt1", 00:08:44.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.231 "is_configured": true, 00:08:44.231 "data_offset": 2048, 00:08:44.231 "data_size": 63488 00:08:44.231 }, 00:08:44.231 { 00:08:44.231 "name": "pt2", 00:08:44.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.231 "is_configured": true, 00:08:44.231 "data_offset": 2048, 00:08:44.231 "data_size": 63488 00:08:44.231 } 00:08:44.231 ] 00:08:44.231 } 00:08:44.231 } 00:08:44.231 }' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.231 pt2' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.231 08:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 [2024-12-13 08:19:56.550456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 ']' 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.231 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.231 [2024-12-13 08:19:56.594005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.231 [2024-12-13 08:19:56.594129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.231 [2024-12-13 08:19:56.594273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.231 [2024-12-13 08:19:56.594381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.231 [2024-12-13 08:19:56.594437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 [2024-12-13 08:19:56.717829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.492 [2024-12-13 08:19:56.719794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.492 [2024-12-13 08:19:56.719918] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:44.492 [2024-12-13 08:19:56.720025] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:44.492 [2024-12-13 08:19:56.720108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.492 [2024-12-13 08:19:56.720174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:44.492 request: 00:08:44.492 { 00:08:44.492 "name": "raid_bdev1", 00:08:44.492 "raid_level": "raid1", 00:08:44.492 "base_bdevs": [ 00:08:44.492 "malloc1", 00:08:44.492 "malloc2" 00:08:44.492 ], 00:08:44.492 "superblock": false, 00:08:44.492 "method": "bdev_raid_create", 00:08:44.492 "req_id": 1 00:08:44.492 } 00:08:44.492 Got JSON-RPC error response 00:08:44.492 response: 00:08:44.492 { 00:08:44.492 "code": -17, 00:08:44.492 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.492 } 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.492 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.492 [2024-12-13 08:19:56.785690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.492 [2024-12-13 08:19:56.785828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.492 [2024-12-13 08:19:56.785865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:44.492 [2024-12-13 08:19:56.785903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.492 [2024-12-13 08:19:56.788338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.492 [2024-12-13 08:19:56.788430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.492 [2024-12-13 08:19:56.788546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.493 [2024-12-13 08:19:56.788631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.493 pt1 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.493 08:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.493 "name": "raid_bdev1", 00:08:44.493 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:44.493 "strip_size_kb": 0, 00:08:44.493 "state": "configuring", 00:08:44.493 "raid_level": "raid1", 00:08:44.493 "superblock": true, 00:08:44.493 "num_base_bdevs": 2, 00:08:44.493 "num_base_bdevs_discovered": 1, 00:08:44.493 "num_base_bdevs_operational": 2, 00:08:44.493 "base_bdevs_list": [ 00:08:44.493 { 00:08:44.493 "name": "pt1", 00:08:44.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.493 
"is_configured": true, 00:08:44.493 "data_offset": 2048, 00:08:44.493 "data_size": 63488 00:08:44.493 }, 00:08:44.493 { 00:08:44.493 "name": null, 00:08:44.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.493 "is_configured": false, 00:08:44.493 "data_offset": 2048, 00:08:44.493 "data_size": 63488 00:08:44.493 } 00:08:44.493 ] 00:08:44.493 }' 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.493 08:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.061 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.061 [2024-12-13 08:19:57.201011] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.061 [2024-12-13 08:19:57.201111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.061 [2024-12-13 08:19:57.201134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:45.061 [2024-12-13 08:19:57.201145] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.061 [2024-12-13 08:19:57.201633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.062 [2024-12-13 08:19:57.201668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.062 [2024-12-13 08:19:57.201754] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.062 [2024-12-13 08:19:57.201784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.062 [2024-12-13 08:19:57.201944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.062 [2024-12-13 08:19:57.201957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.062 [2024-12-13 08:19:57.202238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:45.062 [2024-12-13 08:19:57.202408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.062 [2024-12-13 08:19:57.202418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:45.062 [2024-12-13 08:19:57.202581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.062 pt2 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.062 
08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.062 "name": "raid_bdev1", 00:08:45.062 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:45.062 "strip_size_kb": 0, 00:08:45.062 "state": "online", 00:08:45.062 "raid_level": "raid1", 00:08:45.062 "superblock": true, 00:08:45.062 "num_base_bdevs": 2, 00:08:45.062 "num_base_bdevs_discovered": 2, 00:08:45.062 "num_base_bdevs_operational": 2, 00:08:45.062 "base_bdevs_list": [ 00:08:45.062 { 00:08:45.062 "name": "pt1", 00:08:45.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.062 "is_configured": true, 00:08:45.062 "data_offset": 2048, 00:08:45.062 "data_size": 63488 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "name": "pt2", 00:08:45.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.062 "is_configured": true, 00:08:45.062 "data_offset": 2048, 00:08:45.062 "data_size": 63488 00:08:45.062 } 00:08:45.062 ] 00:08:45.062 }' 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:45.062 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.321 [2024-12-13 08:19:57.648481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.321 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.321 "name": "raid_bdev1", 00:08:45.321 "aliases": [ 00:08:45.321 "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9" 00:08:45.321 ], 00:08:45.321 "product_name": "Raid Volume", 00:08:45.321 "block_size": 512, 00:08:45.321 "num_blocks": 63488, 00:08:45.321 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:45.321 "assigned_rate_limits": { 00:08:45.321 "rw_ios_per_sec": 0, 00:08:45.321 "rw_mbytes_per_sec": 0, 00:08:45.321 "r_mbytes_per_sec": 0, 00:08:45.321 "w_mbytes_per_sec": 0 
00:08:45.321 }, 00:08:45.321 "claimed": false, 00:08:45.321 "zoned": false, 00:08:45.321 "supported_io_types": { 00:08:45.321 "read": true, 00:08:45.321 "write": true, 00:08:45.321 "unmap": false, 00:08:45.321 "flush": false, 00:08:45.321 "reset": true, 00:08:45.321 "nvme_admin": false, 00:08:45.321 "nvme_io": false, 00:08:45.321 "nvme_io_md": false, 00:08:45.321 "write_zeroes": true, 00:08:45.321 "zcopy": false, 00:08:45.321 "get_zone_info": false, 00:08:45.321 "zone_management": false, 00:08:45.321 "zone_append": false, 00:08:45.321 "compare": false, 00:08:45.321 "compare_and_write": false, 00:08:45.321 "abort": false, 00:08:45.321 "seek_hole": false, 00:08:45.321 "seek_data": false, 00:08:45.321 "copy": false, 00:08:45.321 "nvme_iov_md": false 00:08:45.321 }, 00:08:45.321 "memory_domains": [ 00:08:45.321 { 00:08:45.321 "dma_device_id": "system", 00:08:45.321 "dma_device_type": 1 00:08:45.321 }, 00:08:45.321 { 00:08:45.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.321 "dma_device_type": 2 00:08:45.321 }, 00:08:45.321 { 00:08:45.321 "dma_device_id": "system", 00:08:45.321 "dma_device_type": 1 00:08:45.321 }, 00:08:45.321 { 00:08:45.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.321 "dma_device_type": 2 00:08:45.321 } 00:08:45.321 ], 00:08:45.321 "driver_specific": { 00:08:45.321 "raid": { 00:08:45.321 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:45.321 "strip_size_kb": 0, 00:08:45.321 "state": "online", 00:08:45.321 "raid_level": "raid1", 00:08:45.321 "superblock": true, 00:08:45.322 "num_base_bdevs": 2, 00:08:45.322 "num_base_bdevs_discovered": 2, 00:08:45.322 "num_base_bdevs_operational": 2, 00:08:45.322 "base_bdevs_list": [ 00:08:45.322 { 00:08:45.322 "name": "pt1", 00:08:45.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.322 "is_configured": true, 00:08:45.322 "data_offset": 2048, 00:08:45.322 "data_size": 63488 00:08:45.322 }, 00:08:45.322 { 00:08:45.322 "name": "pt2", 00:08:45.322 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:45.322 "is_configured": true, 00:08:45.322 "data_offset": 2048, 00:08:45.322 "data_size": 63488 00:08:45.322 } 00:08:45.322 ] 00:08:45.322 } 00:08:45.322 } 00:08:45.322 }' 00:08:45.322 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.580 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:45.581 pt2' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:45.581 [2024-12-13 08:19:57.828238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 '!=' a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 ']' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.581 [2024-12-13 08:19:57.879892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:45.581 "name": "raid_bdev1", 00:08:45.581 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:45.581 "strip_size_kb": 0, 00:08:45.581 "state": "online", 00:08:45.581 "raid_level": "raid1", 00:08:45.581 "superblock": true, 00:08:45.581 "num_base_bdevs": 2, 00:08:45.581 "num_base_bdevs_discovered": 1, 00:08:45.581 "num_base_bdevs_operational": 1, 00:08:45.581 "base_bdevs_list": [ 00:08:45.581 { 00:08:45.581 "name": null, 00:08:45.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.581 "is_configured": false, 00:08:45.581 "data_offset": 0, 00:08:45.581 "data_size": 63488 00:08:45.581 }, 00:08:45.581 { 00:08:45.581 "name": "pt2", 00:08:45.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.581 "is_configured": true, 00:08:45.581 "data_offset": 2048, 00:08:45.581 "data_size": 63488 00:08:45.581 } 00:08:45.581 ] 00:08:45.581 }' 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.581 08:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.149 [2024-12-13 08:19:58.375013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.149 [2024-12-13 08:19:58.375126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.149 [2024-12-13 08:19:58.375237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.149 [2024-12-13 08:19:58.375310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.149 [2024-12-13 08:19:58.375360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:46.149 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.150 [2024-12-13 08:19:58.450845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.150 [2024-12-13 08:19:58.450985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.150 [2024-12-13 08:19:58.451024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:46.150 [2024-12-13 08:19:58.451065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.150 [2024-12-13 08:19:58.453512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.150 [2024-12-13 08:19:58.453592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.150 [2024-12-13 08:19:58.453710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:46.150 [2024-12-13 08:19:58.453786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.150 [2024-12-13 08:19:58.453995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.150 [2024-12-13 08:19:58.454043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.150 [2024-12-13 08:19:58.454380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:46.150 [2024-12-13 08:19:58.454591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.150 [2024-12-13 08:19:58.454655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:46.150 [2024-12-13 08:19:58.454918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.150 pt2 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:46.150 "name": "raid_bdev1", 00:08:46.150 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:46.150 "strip_size_kb": 0, 00:08:46.150 "state": "online", 00:08:46.150 "raid_level": "raid1", 00:08:46.150 "superblock": true, 00:08:46.150 "num_base_bdevs": 2, 00:08:46.150 "num_base_bdevs_discovered": 1, 00:08:46.150 "num_base_bdevs_operational": 1, 00:08:46.150 "base_bdevs_list": [ 00:08:46.150 { 00:08:46.150 "name": null, 00:08:46.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.150 "is_configured": false, 00:08:46.150 "data_offset": 2048, 00:08:46.150 "data_size": 63488 00:08:46.150 }, 00:08:46.150 { 00:08:46.150 "name": "pt2", 00:08:46.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.150 "is_configured": true, 00:08:46.150 "data_offset": 2048, 00:08:46.150 "data_size": 63488 00:08:46.150 } 00:08:46.150 ] 00:08:46.150 }' 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.150 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.735 [2024-12-13 08:19:58.882116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.735 [2024-12-13 08:19:58.882190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.735 [2024-12-13 08:19:58.882293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.735 [2024-12-13 08:19:58.882376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.735 [2024-12-13 08:19:58.882410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.735 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.735 [2024-12-13 08:19:58.942029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.735 [2024-12-13 08:19:58.942143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.735 [2024-12-13 08:19:58.942190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:46.735 [2024-12-13 08:19:58.942242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.735 [2024-12-13 08:19:58.944466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.735 [2024-12-13 08:19:58.944537] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.735 [2024-12-13 08:19:58.944673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:46.735 [2024-12-13 08:19:58.944741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.735 [2024-12-13 08:19:58.944903] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:46.735 [2024-12-13 08:19:58.944958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.735 [2024-12-13 08:19:58.944995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:46.735 [2024-12-13 08:19:58.945082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.735 [2024-12-13 08:19:58.945207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:46.735 [2024-12-13 08:19:58.945244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.735 [2024-12-13 08:19:58.945506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:46.735 [2024-12-13 08:19:58.945685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:46.736 [2024-12-13 08:19:58.945731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:46.736 [2024-12-13 08:19:58.945955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.736 pt1 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.736 "name": "raid_bdev1", 00:08:46.736 "uuid": "a3f455d7-a477-4dbf-a9a2-923bf0fc47d9", 00:08:46.736 "strip_size_kb": 0, 00:08:46.736 "state": "online", 00:08:46.736 "raid_level": "raid1", 00:08:46.736 "superblock": true, 00:08:46.736 "num_base_bdevs": 2, 00:08:46.736 "num_base_bdevs_discovered": 1, 00:08:46.736 "num_base_bdevs_operational": 
1, 00:08:46.736 "base_bdevs_list": [ 00:08:46.736 { 00:08:46.736 "name": null, 00:08:46.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.736 "is_configured": false, 00:08:46.736 "data_offset": 2048, 00:08:46.736 "data_size": 63488 00:08:46.736 }, 00:08:46.736 { 00:08:46.736 "name": "pt2", 00:08:46.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.736 "is_configured": true, 00:08:46.736 "data_offset": 2048, 00:08:46.736 "data_size": 63488 00:08:46.736 } 00:08:46.736 ] 00:08:46.736 }' 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.736 08:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:47.305 [2024-12-13 08:19:59.461397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 '!=' a3f455d7-a477-4dbf-a9a2-923bf0fc47d9 ']' 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63354 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63354 ']' 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63354 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63354 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63354' 00:08:47.305 killing process with pid 63354 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63354 00:08:47.305 [2024-12-13 08:19:59.547075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.305 [2024-12-13 08:19:59.547236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.305 08:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63354 00:08:47.305 [2024-12-13 08:19:59.547323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.305 [2024-12-13 08:19:59.547341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:08:47.564 [2024-12-13 08:19:59.762824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.941 08:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:48.941 00:08:48.941 real 0m6.148s 00:08:48.941 user 0m9.247s 00:08:48.941 sys 0m1.104s 00:08:48.941 ************************************ 00:08:48.941 END TEST raid_superblock_test 00:08:48.941 ************************************ 00:08:48.941 08:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.941 08:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.941 08:20:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:48.941 08:20:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.941 08:20:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.941 08:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.941 ************************************ 00:08:48.941 START TEST raid_read_error_test 00:08:48.941 ************************************ 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mWKdKNo2qN 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63684 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63684 00:08:48.941 
08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63684 ']' 00:08:48.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.941 08:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.941 [2024-12-13 08:20:01.083408] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:48.941 [2024-12-13 08:20:01.083550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63684 ] 00:08:48.941 [2024-12-13 08:20:01.260111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.201 [2024-12-13 08:20:01.377014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.460 [2024-12-13 08:20:01.581501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.460 [2024-12-13 08:20:01.581570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 BaseBdev1_malloc 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 true 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 [2024-12-13 08:20:01.992943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.720 [2024-12-13 08:20:01.993054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.720 [2024-12-13 08:20:01.993113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.720 [2024-12-13 08:20:01.993158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.720 [2024-12-13 08:20:01.995349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.720 [2024-12-13 08:20:01.995438] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.720 BaseBdev1 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 BaseBdev2_malloc 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 true 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 [2024-12-13 08:20:02.047080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.720 [2024-12-13 08:20:02.047184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.720 [2024-12-13 08:20:02.047235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.720 [2024-12-13 08:20:02.047270] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.720 [2024-12-13 08:20:02.049324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.720 [2024-12-13 08:20:02.049394] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.720 BaseBdev2 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.720 [2024-12-13 08:20:02.055148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.720 [2024-12-13 08:20:02.057008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.720 [2024-12-13 08:20:02.057247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.720 [2024-12-13 08:20:02.057296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.720 [2024-12-13 08:20:02.057537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.720 [2024-12-13 08:20:02.057736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.720 [2024-12-13 08:20:02.057777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.720 [2024-12-13 08:20:02.057984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.720 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.721 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.980 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.980 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.980 "name": "raid_bdev1", 00:08:49.980 "uuid": "db38d170-2dd0-4bef-b454-632e2ff9a2ce", 00:08:49.980 "strip_size_kb": 0, 00:08:49.980 "state": "online", 00:08:49.980 "raid_level": "raid1", 00:08:49.980 "superblock": true, 00:08:49.980 "num_base_bdevs": 2, 00:08:49.980 
"num_base_bdevs_discovered": 2, 00:08:49.980 "num_base_bdevs_operational": 2, 00:08:49.980 "base_bdevs_list": [ 00:08:49.980 { 00:08:49.980 "name": "BaseBdev1", 00:08:49.980 "uuid": "80017397-933d-5849-badd-48f849ea5d84", 00:08:49.980 "is_configured": true, 00:08:49.980 "data_offset": 2048, 00:08:49.980 "data_size": 63488 00:08:49.980 }, 00:08:49.980 { 00:08:49.980 "name": "BaseBdev2", 00:08:49.980 "uuid": "1f7ae586-c82e-52d1-a087-b3d6ef80a048", 00:08:49.980 "is_configured": true, 00:08:49.980 "data_offset": 2048, 00:08:49.980 "data_size": 63488 00:08:49.980 } 00:08:49.980 ] 00:08:49.980 }' 00:08:49.980 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.980 08:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.238 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.238 08:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.497 [2024-12-13 08:20:02.607543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:51.433 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:51.434 08:20:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.434 "name": "raid_bdev1", 00:08:51.434 "uuid": "db38d170-2dd0-4bef-b454-632e2ff9a2ce", 00:08:51.434 "strip_size_kb": 0, 00:08:51.434 "state": "online", 
00:08:51.434 "raid_level": "raid1", 00:08:51.434 "superblock": true, 00:08:51.434 "num_base_bdevs": 2, 00:08:51.434 "num_base_bdevs_discovered": 2, 00:08:51.434 "num_base_bdevs_operational": 2, 00:08:51.434 "base_bdevs_list": [ 00:08:51.434 { 00:08:51.434 "name": "BaseBdev1", 00:08:51.434 "uuid": "80017397-933d-5849-badd-48f849ea5d84", 00:08:51.434 "is_configured": true, 00:08:51.434 "data_offset": 2048, 00:08:51.434 "data_size": 63488 00:08:51.434 }, 00:08:51.434 { 00:08:51.434 "name": "BaseBdev2", 00:08:51.434 "uuid": "1f7ae586-c82e-52d1-a087-b3d6ef80a048", 00:08:51.434 "is_configured": true, 00:08:51.434 "data_offset": 2048, 00:08:51.434 "data_size": 63488 00:08:51.434 } 00:08:51.434 ] 00:08:51.434 }' 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.434 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.693 08:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.693 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.693 08:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.693 [2024-12-13 08:20:04.004126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.693 [2024-12-13 08:20:04.004238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.693 [2024-12-13 08:20:04.007098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.693 [2024-12-13 08:20:04.007207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.693 [2024-12-13 08:20:04.007311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.693 [2024-12-13 08:20:04.007363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:51.693 { 00:08:51.693 "results": [ 00:08:51.693 { 00:08:51.693 "job": "raid_bdev1", 00:08:51.693 "core_mask": "0x1", 00:08:51.693 "workload": "randrw", 00:08:51.693 "percentage": 50, 00:08:51.693 "status": "finished", 00:08:51.693 "queue_depth": 1, 00:08:51.693 "io_size": 131072, 00:08:51.693 "runtime": 1.397659, 00:08:51.693 "iops": 16624.942135384954, 00:08:51.693 "mibps": 2078.1177669231192, 00:08:51.693 "io_failed": 0, 00:08:51.693 "io_timeout": 0, 00:08:51.693 "avg_latency_us": 57.280818125164906, 00:08:51.693 "min_latency_us": 24.370305676855896, 00:08:51.693 "max_latency_us": 1538.235807860262 00:08:51.693 } 00:08:51.693 ], 00:08:51.693 "core_count": 1 00:08:51.693 } 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63684 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63684 ']' 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63684 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63684 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.693 killing process with pid 63684 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63684' 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63684 00:08:51.693 [2024-12-13 
08:20:04.055728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.693 08:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63684 00:08:51.952 [2024-12-13 08:20:04.193991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mWKdKNo2qN 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:53.331 00:08:53.331 real 0m4.427s 00:08:53.331 user 0m5.351s 00:08:53.331 sys 0m0.523s 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.331 08:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.331 ************************************ 00:08:53.331 END TEST raid_read_error_test 00:08:53.331 ************************************ 00:08:53.331 08:20:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:53.331 08:20:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.331 08:20:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.331 08:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.331 ************************************ 00:08:53.331 START TEST 
raid_write_error_test 00:08:53.331 ************************************ 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.331 08:20:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2sLAWs4Wg7 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63824 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63824 00:08:53.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63824 ']' 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.331 08:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.331 [2024-12-13 08:20:05.576328] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
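The `waitforlisten 63824` step above blocks until the freshly spawned bdevperf process is up and listening on `/var/tmp/spdk.sock`, retrying with `max_retries=100` as the xtrace shows. A minimal, self-contained sketch of that polling pattern — the helper name, path, and sleep interval here are illustrative, not SPDK's actual `waitforlisten` implementation:

```shell
#!/bin/sh
# Illustrative sketch of a waitforlisten-style loop: poll until a target
# path (e.g. a UNIX domain socket) appears, or a retry budget runs out.
wait_for_path() {
    path=$1
    max_retries=${2:-100}   # mirrors max_retries=100 in the trace above
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0   # target exists: process is listening
        i=$((i + 1))
        sleep 0.1
    done
    echo "gave up waiting for $path" >&2
    return 1
}

# Demo: create the target from a background job, then wait for it.
tmp=$(mktemp -u)
( sleep 0.3; : > "$tmp" ) &
wait_for_path "$tmp" 100 && echo "ready"
rm -f "$tmp"
```

The real helper additionally checks that the PID is still alive between retries, so a crashed server fails fast instead of burning the whole retry budget.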
00:08:53.331 [2024-12-13 08:20:05.576548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63824 ] 00:08:53.590 [2024-12-13 08:20:05.746209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.590 [2024-12-13 08:20:05.868609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.848 [2024-12-13 08:20:06.077137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.848 [2024-12-13 08:20:06.077186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.108 BaseBdev1_malloc 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.108 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.108 true 00:08:54.109 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:54.109 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.109 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.109 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.109 [2024-12-13 08:20:06.470394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.109 [2024-12-13 08:20:06.470529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.109 [2024-12-13 08:20:06.470570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.109 [2024-12-13 08:20:06.470600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.368 [2024-12-13 08:20:06.472819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.368 [2024-12-13 08:20:06.472906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.368 BaseBdev1 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.368 BaseBdev2_malloc 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.368 08:20:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.368 true 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.368 [2024-12-13 08:20:06.535821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.368 [2024-12-13 08:20:06.535943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.368 [2024-12-13 08:20:06.535982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.368 [2024-12-13 08:20:06.536012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.368 [2024-12-13 08:20:06.538258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.368 [2024-12-13 08:20:06.538331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.368 BaseBdev2 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.368 [2024-12-13 08:20:06.547856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:54.368 [2024-12-13 08:20:06.549734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.368 [2024-12-13 08:20:06.549981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.368 [2024-12-13 08:20:06.550034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.368 [2024-12-13 08:20:06.550348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.368 [2024-12-13 08:20:06.550586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.368 [2024-12-13 08:20:06.550631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:54.368 [2024-12-13 08:20:06.550845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.368 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.369 "name": "raid_bdev1", 00:08:54.369 "uuid": "47c78df9-86f0-4fcb-843b-ee58f00fbb88", 00:08:54.369 "strip_size_kb": 0, 00:08:54.369 "state": "online", 00:08:54.369 "raid_level": "raid1", 00:08:54.369 "superblock": true, 00:08:54.369 "num_base_bdevs": 2, 00:08:54.369 "num_base_bdevs_discovered": 2, 00:08:54.369 "num_base_bdevs_operational": 2, 00:08:54.369 "base_bdevs_list": [ 00:08:54.369 { 00:08:54.369 "name": "BaseBdev1", 00:08:54.369 "uuid": "ca0de296-5ac4-5265-b5b7-19f5ed3409c9", 00:08:54.369 "is_configured": true, 00:08:54.369 "data_offset": 2048, 00:08:54.369 "data_size": 63488 00:08:54.369 }, 00:08:54.369 { 00:08:54.369 "name": "BaseBdev2", 00:08:54.369 "uuid": "4e65407a-2cf5-5442-b117-c455f49527bd", 00:08:54.369 "is_configured": true, 00:08:54.369 "data_offset": 2048, 00:08:54.369 "data_size": 63488 00:08:54.369 } 00:08:54.369 ] 00:08:54.369 }' 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.369 08:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.628 08:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.628 08:20:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.895 [2024-12-13 08:20:07.068514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.845 [2024-12-13 08:20:07.985250] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:55.845 [2024-12-13 08:20:07.985402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.845 [2024-12-13 08:20:07.985651] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.845 08:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.846 08:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.846 08:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.846 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.846 08:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.846 "name": "raid_bdev1", 00:08:55.846 "uuid": "47c78df9-86f0-4fcb-843b-ee58f00fbb88", 00:08:55.846 "strip_size_kb": 0, 00:08:55.846 "state": "online", 00:08:55.846 "raid_level": "raid1", 00:08:55.846 "superblock": true, 00:08:55.846 "num_base_bdevs": 2, 00:08:55.846 "num_base_bdevs_discovered": 1, 00:08:55.846 "num_base_bdevs_operational": 1, 00:08:55.846 "base_bdevs_list": [ 00:08:55.846 { 00:08:55.846 "name": null, 00:08:55.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.846 "is_configured": false, 00:08:55.846 "data_offset": 0, 00:08:55.846 "data_size": 63488 00:08:55.846 }, 00:08:55.846 { 00:08:55.846 "name": 
"BaseBdev2", 00:08:55.846 "uuid": "4e65407a-2cf5-5442-b117-c455f49527bd", 00:08:55.846 "is_configured": true, 00:08:55.846 "data_offset": 2048, 00:08:55.846 "data_size": 63488 00:08:55.846 } 00:08:55.846 ] 00:08:55.846 }' 00:08:55.846 08:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.846 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.104 08:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.104 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.104 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.362 [2024-12-13 08:20:08.471497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.362 [2024-12-13 08:20:08.471602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.362 [2024-12-13 08:20:08.474407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.362 [2024-12-13 08:20:08.474482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.362 [2024-12-13 08:20:08.474546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.362 [2024-12-13 08:20:08.474558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:56.362 { 00:08:56.362 "results": [ 00:08:56.362 { 00:08:56.362 "job": "raid_bdev1", 00:08:56.362 "core_mask": "0x1", 00:08:56.362 "workload": "randrw", 00:08:56.362 "percentage": 50, 00:08:56.362 "status": "finished", 00:08:56.362 "queue_depth": 1, 00:08:56.362 "io_size": 131072, 00:08:56.362 "runtime": 1.40391, 00:08:56.362 "iops": 20034.760062966998, 00:08:56.362 "mibps": 2504.3450078708747, 00:08:56.362 "io_failed": 0, 00:08:56.362 "io_timeout": 0, 
00:08:56.362 "avg_latency_us": 47.164127150667056, 00:08:56.362 "min_latency_us": 22.805240174672488, 00:08:56.362 "max_latency_us": 1337.907423580786 00:08:56.362 } 00:08:56.362 ], 00:08:56.362 "core_count": 1 00:08:56.362 } 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63824 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63824 ']' 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63824 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63824 00:08:56.362 killing process with pid 63824 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63824' 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63824 00:08:56.362 [2024-12-13 08:20:08.520766] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.362 08:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63824 00:08:56.363 [2024-12-13 08:20:08.661937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2sLAWs4Wg7 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:57.737 ************************************ 00:08:57.737 END TEST raid_write_error_test 00:08:57.737 ************************************ 00:08:57.737 00:08:57.737 real 0m4.427s 00:08:57.737 user 0m5.333s 00:08:57.737 sys 0m0.521s 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.737 08:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 08:20:09 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:57.737 08:20:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:57.737 08:20:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:57.737 08:20:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.737 08:20:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.737 08:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 ************************************ 00:08:57.737 START TEST raid_state_function_test 00:08:57.737 ************************************ 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.737 
08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63968 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.737 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63968' 00:08:57.737 Process raid pid: 63968 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63968 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63968 ']' 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.738 08:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.738 [2024-12-13 08:20:10.066257] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:08:57.738 [2024-12-13 08:20:10.066482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.997 [2024-12-13 08:20:10.222490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.997 [2024-12-13 08:20:10.344798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.256 [2024-12-13 08:20:10.551719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.256 [2024-12-13 08:20:10.551847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.823 [2024-12-13 08:20:10.944839] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.823 [2024-12-13 08:20:10.944936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.823 [2024-12-13 08:20:10.944966] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.823 [2024-12-13 08:20:10.944990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.823 [2024-12-13 08:20:10.944999] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.823 [2024-12-13 08:20:10.945007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.823 "name": "Existed_Raid", 00:08:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.823 "strip_size_kb": 64, 00:08:58.823 "state": "configuring", 00:08:58.823 "raid_level": "raid0", 00:08:58.823 "superblock": false, 00:08:58.823 "num_base_bdevs": 3, 00:08:58.823 "num_base_bdevs_discovered": 0, 00:08:58.823 "num_base_bdevs_operational": 3, 00:08:58.823 "base_bdevs_list": [ 00:08:58.823 { 00:08:58.823 "name": "BaseBdev1", 00:08:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.823 "is_configured": false, 00:08:58.823 "data_offset": 0, 00:08:58.823 "data_size": 0 00:08:58.823 }, 00:08:58.823 { 00:08:58.823 "name": "BaseBdev2", 00:08:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.823 "is_configured": false, 00:08:58.823 "data_offset": 0, 00:08:58.823 "data_size": 0 00:08:58.823 }, 00:08:58.823 { 00:08:58.823 "name": "BaseBdev3", 00:08:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.823 "is_configured": false, 00:08:58.823 "data_offset": 0, 00:08:58.823 "data_size": 0 00:08:58.823 } 00:08:58.823 ] 00:08:58.823 }' 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.823 08:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.082 08:20:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 [2024-12-13 08:20:11.348158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.082 [2024-12-13 08:20:11.348241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 [2024-12-13 08:20:11.360130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.082 [2024-12-13 08:20:11.360213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.082 [2024-12-13 08:20:11.360243] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.082 [2024-12-13 08:20:11.360268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.082 [2024-12-13 08:20:11.360330] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.082 [2024-12-13 08:20:11.360357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 [2024-12-13 08:20:11.407203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.082 BaseBdev1 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.082 [ 00:08:59.082 { 00:08:59.082 "name": "BaseBdev1", 00:08:59.082 "aliases": [ 00:08:59.082 "6b685d58-45dd-4ee4-a370-38e6a18dac3f" 00:08:59.082 ], 00:08:59.082 
"product_name": "Malloc disk", 00:08:59.082 "block_size": 512, 00:08:59.082 "num_blocks": 65536, 00:08:59.082 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:08:59.082 "assigned_rate_limits": { 00:08:59.082 "rw_ios_per_sec": 0, 00:08:59.082 "rw_mbytes_per_sec": 0, 00:08:59.082 "r_mbytes_per_sec": 0, 00:08:59.082 "w_mbytes_per_sec": 0 00:08:59.082 }, 00:08:59.082 "claimed": true, 00:08:59.082 "claim_type": "exclusive_write", 00:08:59.082 "zoned": false, 00:08:59.082 "supported_io_types": { 00:08:59.082 "read": true, 00:08:59.082 "write": true, 00:08:59.082 "unmap": true, 00:08:59.082 "flush": true, 00:08:59.082 "reset": true, 00:08:59.082 "nvme_admin": false, 00:08:59.082 "nvme_io": false, 00:08:59.082 "nvme_io_md": false, 00:08:59.082 "write_zeroes": true, 00:08:59.082 "zcopy": true, 00:08:59.082 "get_zone_info": false, 00:08:59.082 "zone_management": false, 00:08:59.082 "zone_append": false, 00:08:59.082 "compare": false, 00:08:59.082 "compare_and_write": false, 00:08:59.082 "abort": true, 00:08:59.082 "seek_hole": false, 00:08:59.082 "seek_data": false, 00:08:59.082 "copy": true, 00:08:59.082 "nvme_iov_md": false 00:08:59.082 }, 00:08:59.082 "memory_domains": [ 00:08:59.082 { 00:08:59.082 "dma_device_id": "system", 00:08:59.082 "dma_device_type": 1 00:08:59.082 }, 00:08:59.082 { 00:08:59.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.082 "dma_device_type": 2 00:08:59.082 } 00:08:59.082 ], 00:08:59.082 "driver_specific": {} 00:08:59.082 } 00:08:59.082 ] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.082 08:20:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.082 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.341 "name": "Existed_Raid", 00:08:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.341 "strip_size_kb": 64, 00:08:59.341 "state": "configuring", 00:08:59.341 "raid_level": "raid0", 00:08:59.341 "superblock": false, 00:08:59.341 "num_base_bdevs": 3, 00:08:59.341 "num_base_bdevs_discovered": 1, 00:08:59.341 "num_base_bdevs_operational": 3, 00:08:59.341 "base_bdevs_list": [ 00:08:59.341 { 00:08:59.341 "name": "BaseBdev1", 
00:08:59.341 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:08:59.341 "is_configured": true, 00:08:59.341 "data_offset": 0, 00:08:59.341 "data_size": 65536 00:08:59.341 }, 00:08:59.341 { 00:08:59.341 "name": "BaseBdev2", 00:08:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.341 "is_configured": false, 00:08:59.341 "data_offset": 0, 00:08:59.341 "data_size": 0 00:08:59.341 }, 00:08:59.341 { 00:08:59.341 "name": "BaseBdev3", 00:08:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.341 "is_configured": false, 00:08:59.341 "data_offset": 0, 00:08:59.341 "data_size": 0 00:08:59.341 } 00:08:59.341 ] 00:08:59.341 }' 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.341 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.600 [2024-12-13 08:20:11.826534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.600 [2024-12-13 08:20:11.826632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.600 [2024-12-13 
08:20:11.838541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.600 [2024-12-13 08:20:11.840383] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.600 [2024-12-13 08:20:11.840477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.600 [2024-12-13 08:20:11.840492] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.600 [2024-12-13 08:20:11.840501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.600 "name": "Existed_Raid", 00:08:59.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.600 "strip_size_kb": 64, 00:08:59.600 "state": "configuring", 00:08:59.600 "raid_level": "raid0", 00:08:59.600 "superblock": false, 00:08:59.600 "num_base_bdevs": 3, 00:08:59.600 "num_base_bdevs_discovered": 1, 00:08:59.600 "num_base_bdevs_operational": 3, 00:08:59.600 "base_bdevs_list": [ 00:08:59.600 { 00:08:59.600 "name": "BaseBdev1", 00:08:59.600 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:08:59.600 "is_configured": true, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 65536 00:08:59.600 }, 00:08:59.600 { 00:08:59.600 "name": "BaseBdev2", 00:08:59.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.600 "is_configured": false, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 0 00:08:59.600 }, 00:08:59.600 { 00:08:59.600 "name": "BaseBdev3", 00:08:59.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.600 "is_configured": false, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 0 00:08:59.600 } 00:08:59.600 ] 00:08:59.600 }' 00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.600 08:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.167 [2024-12-13 08:20:12.303724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.167 BaseBdev2 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.167 08:20:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.167 [ 00:09:00.167 { 00:09:00.167 "name": "BaseBdev2", 00:09:00.167 "aliases": [ 00:09:00.167 "8d0a0e26-46a8-4290-a4ef-f459de60c156" 00:09:00.167 ], 00:09:00.167 "product_name": "Malloc disk", 00:09:00.167 "block_size": 512, 00:09:00.167 "num_blocks": 65536, 00:09:00.167 "uuid": "8d0a0e26-46a8-4290-a4ef-f459de60c156", 00:09:00.167 "assigned_rate_limits": { 00:09:00.167 "rw_ios_per_sec": 0, 00:09:00.167 "rw_mbytes_per_sec": 0, 00:09:00.167 "r_mbytes_per_sec": 0, 00:09:00.167 "w_mbytes_per_sec": 0 00:09:00.167 }, 00:09:00.167 "claimed": true, 00:09:00.167 "claim_type": "exclusive_write", 00:09:00.167 "zoned": false, 00:09:00.167 "supported_io_types": { 00:09:00.167 "read": true, 00:09:00.167 "write": true, 00:09:00.167 "unmap": true, 00:09:00.167 "flush": true, 00:09:00.167 "reset": true, 00:09:00.167 "nvme_admin": false, 00:09:00.167 "nvme_io": false, 00:09:00.167 "nvme_io_md": false, 00:09:00.167 "write_zeroes": true, 00:09:00.167 "zcopy": true, 00:09:00.167 "get_zone_info": false, 00:09:00.167 "zone_management": false, 00:09:00.167 "zone_append": false, 00:09:00.167 "compare": false, 00:09:00.167 "compare_and_write": false, 00:09:00.167 "abort": true, 00:09:00.167 "seek_hole": false, 00:09:00.167 "seek_data": false, 00:09:00.167 "copy": true, 00:09:00.167 "nvme_iov_md": false 00:09:00.167 }, 00:09:00.167 "memory_domains": [ 00:09:00.167 { 00:09:00.167 "dma_device_id": "system", 00:09:00.167 "dma_device_type": 1 00:09:00.167 }, 00:09:00.167 { 00:09:00.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.167 "dma_device_type": 2 00:09:00.167 } 00:09:00.167 ], 00:09:00.167 "driver_specific": {} 00:09:00.167 } 00:09:00.167 ] 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.167 08:20:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.167 "name": "Existed_Raid", 00:09:00.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.167 "strip_size_kb": 64, 00:09:00.167 "state": "configuring", 00:09:00.167 "raid_level": "raid0", 00:09:00.167 "superblock": false, 00:09:00.167 "num_base_bdevs": 3, 00:09:00.167 "num_base_bdevs_discovered": 2, 00:09:00.167 "num_base_bdevs_operational": 3, 00:09:00.167 "base_bdevs_list": [ 00:09:00.167 { 00:09:00.167 "name": "BaseBdev1", 00:09:00.167 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:09:00.167 "is_configured": true, 00:09:00.167 "data_offset": 0, 00:09:00.167 "data_size": 65536 00:09:00.167 }, 00:09:00.167 { 00:09:00.167 "name": "BaseBdev2", 00:09:00.167 "uuid": "8d0a0e26-46a8-4290-a4ef-f459de60c156", 00:09:00.167 "is_configured": true, 00:09:00.167 "data_offset": 0, 00:09:00.167 "data_size": 65536 00:09:00.167 }, 00:09:00.167 { 00:09:00.167 "name": "BaseBdev3", 00:09:00.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.167 "is_configured": false, 00:09:00.167 "data_offset": 0, 00:09:00.167 "data_size": 0 00:09:00.167 } 00:09:00.167 ] 00:09:00.167 }' 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.167 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.425 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.425 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.425 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 [2024-12-13 08:20:12.827467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.684 [2024-12-13 08:20:12.827520] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.684 [2024-12-13 08:20:12.827533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:00.684 [2024-12-13 08:20:12.827925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.684 [2024-12-13 08:20:12.828123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.684 [2024-12-13 08:20:12.828135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.684 [2024-12-13 08:20:12.828471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.684 BaseBdev3 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 
08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 [ 00:09:00.684 { 00:09:00.684 "name": "BaseBdev3", 00:09:00.684 "aliases": [ 00:09:00.684 "d97e4d3b-e43d-4fc0-bdbb-0ff1a1d46438" 00:09:00.684 ], 00:09:00.684 "product_name": "Malloc disk", 00:09:00.684 "block_size": 512, 00:09:00.684 "num_blocks": 65536, 00:09:00.684 "uuid": "d97e4d3b-e43d-4fc0-bdbb-0ff1a1d46438", 00:09:00.684 "assigned_rate_limits": { 00:09:00.684 "rw_ios_per_sec": 0, 00:09:00.684 "rw_mbytes_per_sec": 0, 00:09:00.684 "r_mbytes_per_sec": 0, 00:09:00.684 "w_mbytes_per_sec": 0 00:09:00.684 }, 00:09:00.684 "claimed": true, 00:09:00.684 "claim_type": "exclusive_write", 00:09:00.684 "zoned": false, 00:09:00.684 "supported_io_types": { 00:09:00.684 "read": true, 00:09:00.684 "write": true, 00:09:00.684 "unmap": true, 00:09:00.684 "flush": true, 00:09:00.684 "reset": true, 00:09:00.684 "nvme_admin": false, 00:09:00.684 "nvme_io": false, 00:09:00.684 "nvme_io_md": false, 00:09:00.684 "write_zeroes": true, 00:09:00.684 "zcopy": true, 00:09:00.684 "get_zone_info": false, 00:09:00.684 "zone_management": false, 00:09:00.684 "zone_append": false, 00:09:00.684 "compare": false, 00:09:00.684 "compare_and_write": false, 00:09:00.684 "abort": true, 00:09:00.684 "seek_hole": false, 00:09:00.684 "seek_data": false, 00:09:00.684 "copy": true, 00:09:00.684 "nvme_iov_md": false 00:09:00.684 }, 00:09:00.684 "memory_domains": [ 00:09:00.684 { 00:09:00.684 "dma_device_id": "system", 00:09:00.684 "dma_device_type": 1 00:09:00.684 }, 00:09:00.684 { 00:09:00.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.684 "dma_device_type": 2 00:09:00.684 } 00:09:00.684 ], 00:09:00.684 "driver_specific": {} 00:09:00.684 } 00:09:00.684 ] 
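The trace above repeats one pattern: create a malloc bdev over RPC, wait for it to be examined, then filter the JSON returned by `bdev_get_bdevs`/`bdev_raid_get_bdevs` through `jq` to assert the raid state. A minimal standalone sketch of that verify step is below; the JSON literal is a hand-copied subset of the `bdev_raid_get_bdevs` output shown in this log (in the real test it comes from `rpc_cmd` against a running SPDK target), and `jq` is assumed to be installed:

```shell
# Sketch of the verify_raid_bdev_state-style jq filter seen in the trace.
# The JSON is an illustrative subset of bdev_raid_get_bdevs output, not live RPC data.
info='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid0",
        "num_base_bdevs":3,"num_base_bdevs_discovered":2}]'

# Select the raid bdev by name, then pull out the fields the test compares.
state=$(printf '%s' "$info" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
discovered=$(printf '%s' "$info" | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered')

echo "$state $discovered"
```

In the log, this same filter drives the assertions as the array moves through `configuring` (2 of 3 base bdevs claimed), `online` (all 3 claimed), and `offline` (after `bdev_malloc_delete BaseBdev1`).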
00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.684 "name": "Existed_Raid", 00:09:00.684 "uuid": "2a653ff3-0065-4740-a0c7-eb5cc99e4e9f", 00:09:00.684 "strip_size_kb": 64, 00:09:00.684 "state": "online", 00:09:00.684 "raid_level": "raid0", 00:09:00.684 "superblock": false, 00:09:00.684 "num_base_bdevs": 3, 00:09:00.684 "num_base_bdevs_discovered": 3, 00:09:00.684 "num_base_bdevs_operational": 3, 00:09:00.684 "base_bdevs_list": [ 00:09:00.684 { 00:09:00.684 "name": "BaseBdev1", 00:09:00.684 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:09:00.684 "is_configured": true, 00:09:00.684 "data_offset": 0, 00:09:00.684 "data_size": 65536 00:09:00.684 }, 00:09:00.684 { 00:09:00.684 "name": "BaseBdev2", 00:09:00.684 "uuid": "8d0a0e26-46a8-4290-a4ef-f459de60c156", 00:09:00.684 "is_configured": true, 00:09:00.684 "data_offset": 0, 00:09:00.684 "data_size": 65536 00:09:00.684 }, 00:09:00.684 { 00:09:00.684 "name": "BaseBdev3", 00:09:00.684 "uuid": "d97e4d3b-e43d-4fc0-bdbb-0ff1a1d46438", 00:09:00.684 "is_configured": true, 00:09:00.684 "data_offset": 0, 00:09:00.684 "data_size": 65536 00:09:00.684 } 00:09:00.684 ] 00:09:00.684 }' 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.684 08:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.943 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.201 [2024-12-13 08:20:13.307094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.201 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.202 "name": "Existed_Raid", 00:09:01.202 "aliases": [ 00:09:01.202 "2a653ff3-0065-4740-a0c7-eb5cc99e4e9f" 00:09:01.202 ], 00:09:01.202 "product_name": "Raid Volume", 00:09:01.202 "block_size": 512, 00:09:01.202 "num_blocks": 196608, 00:09:01.202 "uuid": "2a653ff3-0065-4740-a0c7-eb5cc99e4e9f", 00:09:01.202 "assigned_rate_limits": { 00:09:01.202 "rw_ios_per_sec": 0, 00:09:01.202 "rw_mbytes_per_sec": 0, 00:09:01.202 "r_mbytes_per_sec": 0, 00:09:01.202 "w_mbytes_per_sec": 0 00:09:01.202 }, 00:09:01.202 "claimed": false, 00:09:01.202 "zoned": false, 00:09:01.202 "supported_io_types": { 00:09:01.202 "read": true, 00:09:01.202 "write": true, 00:09:01.202 "unmap": true, 00:09:01.202 "flush": true, 00:09:01.202 "reset": true, 00:09:01.202 "nvme_admin": false, 00:09:01.202 "nvme_io": false, 00:09:01.202 "nvme_io_md": false, 00:09:01.202 "write_zeroes": true, 00:09:01.202 "zcopy": false, 00:09:01.202 "get_zone_info": false, 00:09:01.202 "zone_management": false, 00:09:01.202 
"zone_append": false, 00:09:01.202 "compare": false, 00:09:01.202 "compare_and_write": false, 00:09:01.202 "abort": false, 00:09:01.202 "seek_hole": false, 00:09:01.202 "seek_data": false, 00:09:01.202 "copy": false, 00:09:01.202 "nvme_iov_md": false 00:09:01.202 }, 00:09:01.202 "memory_domains": [ 00:09:01.202 { 00:09:01.202 "dma_device_id": "system", 00:09:01.202 "dma_device_type": 1 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.202 "dma_device_type": 2 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "system", 00:09:01.202 "dma_device_type": 1 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.202 "dma_device_type": 2 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "system", 00:09:01.202 "dma_device_type": 1 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.202 "dma_device_type": 2 00:09:01.202 } 00:09:01.202 ], 00:09:01.202 "driver_specific": { 00:09:01.202 "raid": { 00:09:01.202 "uuid": "2a653ff3-0065-4740-a0c7-eb5cc99e4e9f", 00:09:01.202 "strip_size_kb": 64, 00:09:01.202 "state": "online", 00:09:01.202 "raid_level": "raid0", 00:09:01.202 "superblock": false, 00:09:01.202 "num_base_bdevs": 3, 00:09:01.202 "num_base_bdevs_discovered": 3, 00:09:01.202 "num_base_bdevs_operational": 3, 00:09:01.202 "base_bdevs_list": [ 00:09:01.202 { 00:09:01.202 "name": "BaseBdev1", 00:09:01.202 "uuid": "6b685d58-45dd-4ee4-a370-38e6a18dac3f", 00:09:01.202 "is_configured": true, 00:09:01.202 "data_offset": 0, 00:09:01.202 "data_size": 65536 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "name": "BaseBdev2", 00:09:01.202 "uuid": "8d0a0e26-46a8-4290-a4ef-f459de60c156", 00:09:01.202 "is_configured": true, 00:09:01.202 "data_offset": 0, 00:09:01.202 "data_size": 65536 00:09:01.202 }, 00:09:01.202 { 00:09:01.202 "name": "BaseBdev3", 00:09:01.202 "uuid": "d97e4d3b-e43d-4fc0-bdbb-0ff1a1d46438", 00:09:01.202 "is_configured": true, 
00:09:01.202 "data_offset": 0, 00:09:01.202 "data_size": 65536 00:09:01.202 } 00:09:01.202 ] 00:09:01.202 } 00:09:01.202 } 00:09:01.202 }' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.202 BaseBdev2 00:09:01.202 BaseBdev3' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.202 08:20:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.202 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.461 [2024-12-13 08:20:13.586348] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.461 [2024-12-13 08:20:13.586444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.461 [2024-12-13 08:20:13.586533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.461 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.461 "name": "Existed_Raid", 00:09:01.461 "uuid": "2a653ff3-0065-4740-a0c7-eb5cc99e4e9f", 00:09:01.461 "strip_size_kb": 64, 00:09:01.461 "state": "offline", 00:09:01.461 "raid_level": "raid0", 00:09:01.461 "superblock": false, 00:09:01.461 "num_base_bdevs": 3, 00:09:01.461 "num_base_bdevs_discovered": 2, 00:09:01.461 "num_base_bdevs_operational": 2, 00:09:01.461 "base_bdevs_list": [ 00:09:01.461 { 00:09:01.461 "name": null, 00:09:01.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.461 "is_configured": false, 00:09:01.461 "data_offset": 0, 00:09:01.462 "data_size": 65536 00:09:01.462 }, 00:09:01.462 { 00:09:01.462 "name": "BaseBdev2", 00:09:01.462 "uuid": "8d0a0e26-46a8-4290-a4ef-f459de60c156", 00:09:01.462 "is_configured": true, 00:09:01.462 "data_offset": 0, 00:09:01.462 "data_size": 65536 00:09:01.462 }, 00:09:01.462 { 00:09:01.462 "name": "BaseBdev3", 00:09:01.462 "uuid": "d97e4d3b-e43d-4fc0-bdbb-0ff1a1d46438", 00:09:01.462 "is_configured": true, 00:09:01.462 "data_offset": 0, 00:09:01.462 "data_size": 65536 00:09:01.462 } 00:09:01.462 ] 00:09:01.462 }' 00:09:01.462 08:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.462 08:20:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 [2024-12-13 08:20:14.210232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.028 08:20:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.028 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.028 [2024-12-13 08:20:14.371707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.028 [2024-12-13 08:20:14.371766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.286 BaseBdev2 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.286 [ 00:09:02.286 { 00:09:02.286 "name": "BaseBdev2", 00:09:02.286 "aliases": [ 00:09:02.286 "5f58f513-277b-4beb-af55-9c66100e00c2" 00:09:02.286 ], 00:09:02.286 "product_name": "Malloc disk", 00:09:02.286 "block_size": 512, 00:09:02.286 "num_blocks": 65536, 00:09:02.286 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:02.286 "assigned_rate_limits": { 00:09:02.286 "rw_ios_per_sec": 0, 00:09:02.286 "rw_mbytes_per_sec": 0, 00:09:02.286 "r_mbytes_per_sec": 0, 00:09:02.286 "w_mbytes_per_sec": 0 00:09:02.286 }, 00:09:02.286 "claimed": false, 00:09:02.286 "zoned": false, 00:09:02.286 "supported_io_types": { 00:09:02.286 "read": true, 00:09:02.286 "write": true, 00:09:02.286 "unmap": true, 00:09:02.286 "flush": true, 00:09:02.286 "reset": true, 00:09:02.286 "nvme_admin": false, 00:09:02.286 "nvme_io": false, 00:09:02.286 "nvme_io_md": false, 00:09:02.286 "write_zeroes": true, 00:09:02.286 "zcopy": true, 00:09:02.286 "get_zone_info": false, 00:09:02.286 "zone_management": false, 00:09:02.286 "zone_append": false, 00:09:02.286 "compare": false, 00:09:02.286 "compare_and_write": false, 00:09:02.286 "abort": true, 00:09:02.286 "seek_hole": false, 00:09:02.286 "seek_data": false, 00:09:02.286 "copy": true, 00:09:02.286 "nvme_iov_md": false 00:09:02.286 }, 00:09:02.286 "memory_domains": [ 00:09:02.286 { 00:09:02.286 "dma_device_id": "system", 00:09:02.286 "dma_device_type": 1 00:09:02.286 }, 
00:09:02.286 { 00:09:02.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.286 "dma_device_type": 2 00:09:02.286 } 00:09:02.286 ], 00:09:02.286 "driver_specific": {} 00:09:02.286 } 00:09:02.286 ] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.286 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.287 BaseBdev3 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:02.287 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 [ 00:09:02.545 { 00:09:02.545 "name": "BaseBdev3", 00:09:02.545 "aliases": [ 00:09:02.545 "45944ceb-93b6-4c3b-9747-af45419615e7" 00:09:02.545 ], 00:09:02.545 "product_name": "Malloc disk", 00:09:02.545 "block_size": 512, 00:09:02.545 "num_blocks": 65536, 00:09:02.545 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:02.545 "assigned_rate_limits": { 00:09:02.545 "rw_ios_per_sec": 0, 00:09:02.545 "rw_mbytes_per_sec": 0, 00:09:02.545 "r_mbytes_per_sec": 0, 00:09:02.545 "w_mbytes_per_sec": 0 00:09:02.545 }, 00:09:02.545 "claimed": false, 00:09:02.545 "zoned": false, 00:09:02.545 "supported_io_types": { 00:09:02.545 "read": true, 00:09:02.545 "write": true, 00:09:02.545 "unmap": true, 00:09:02.545 "flush": true, 00:09:02.545 "reset": true, 00:09:02.545 "nvme_admin": false, 00:09:02.545 "nvme_io": false, 00:09:02.545 "nvme_io_md": false, 00:09:02.545 "write_zeroes": true, 00:09:02.545 "zcopy": true, 00:09:02.545 "get_zone_info": false, 00:09:02.545 "zone_management": false, 00:09:02.545 "zone_append": false, 00:09:02.545 "compare": false, 00:09:02.545 "compare_and_write": false, 00:09:02.545 "abort": true, 00:09:02.545 "seek_hole": false, 00:09:02.545 "seek_data": false, 00:09:02.545 "copy": true, 00:09:02.545 "nvme_iov_md": false 00:09:02.545 }, 00:09:02.545 "memory_domains": [ 00:09:02.545 { 00:09:02.545 "dma_device_id": "system", 00:09:02.545 "dma_device_type": 1 00:09:02.545 }, 00:09:02.545 { 
00:09:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.545 "dma_device_type": 2 00:09:02.545 } 00:09:02.545 ], 00:09:02.545 "driver_specific": {} 00:09:02.545 } 00:09:02.545 ] 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 [2024-12-13 08:20:14.687483] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.545 [2024-12-13 08:20:14.687586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.545 [2024-12-13 08:20:14.687641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.545 [2024-12-13 08:20:14.689610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.545 "name": "Existed_Raid", 00:09:02.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.545 "strip_size_kb": 64, 00:09:02.545 "state": "configuring", 00:09:02.545 "raid_level": "raid0", 00:09:02.545 "superblock": false, 00:09:02.545 "num_base_bdevs": 3, 00:09:02.545 "num_base_bdevs_discovered": 2, 00:09:02.545 "num_base_bdevs_operational": 3, 00:09:02.545 "base_bdevs_list": [ 00:09:02.545 { 00:09:02.545 "name": "BaseBdev1", 00:09:02.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.545 
"is_configured": false, 00:09:02.545 "data_offset": 0, 00:09:02.545 "data_size": 0 00:09:02.545 }, 00:09:02.545 { 00:09:02.545 "name": "BaseBdev2", 00:09:02.545 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:02.545 "is_configured": true, 00:09:02.545 "data_offset": 0, 00:09:02.545 "data_size": 65536 00:09:02.545 }, 00:09:02.545 { 00:09:02.545 "name": "BaseBdev3", 00:09:02.545 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:02.545 "is_configured": true, 00:09:02.545 "data_offset": 0, 00:09:02.545 "data_size": 65536 00:09:02.545 } 00:09:02.545 ] 00:09:02.545 }' 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.545 08:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.112 [2024-12-13 08:20:15.182674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.112 08:20:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.112 "name": "Existed_Raid", 00:09:03.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.112 "strip_size_kb": 64, 00:09:03.112 "state": "configuring", 00:09:03.112 "raid_level": "raid0", 00:09:03.112 "superblock": false, 00:09:03.112 "num_base_bdevs": 3, 00:09:03.112 "num_base_bdevs_discovered": 1, 00:09:03.112 "num_base_bdevs_operational": 3, 00:09:03.112 "base_bdevs_list": [ 00:09:03.112 { 00:09:03.112 "name": "BaseBdev1", 00:09:03.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.112 "is_configured": false, 00:09:03.112 "data_offset": 0, 00:09:03.112 "data_size": 0 00:09:03.112 }, 00:09:03.112 { 00:09:03.112 "name": null, 00:09:03.112 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:03.112 "is_configured": false, 00:09:03.112 "data_offset": 0, 
00:09:03.112 "data_size": 65536 00:09:03.112 }, 00:09:03.112 { 00:09:03.112 "name": "BaseBdev3", 00:09:03.112 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:03.112 "is_configured": true, 00:09:03.112 "data_offset": 0, 00:09:03.112 "data_size": 65536 00:09:03.112 } 00:09:03.112 ] 00:09:03.112 }' 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.112 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 [2024-12-13 08:20:15.683072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.373 BaseBdev1 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 [ 00:09:03.373 { 00:09:03.373 "name": "BaseBdev1", 00:09:03.373 "aliases": [ 00:09:03.373 "3088a544-cab3-4b42-991d-019b6df831ab" 00:09:03.373 ], 00:09:03.373 "product_name": "Malloc disk", 00:09:03.373 "block_size": 512, 00:09:03.373 "num_blocks": 65536, 00:09:03.373 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:03.373 "assigned_rate_limits": { 00:09:03.373 "rw_ios_per_sec": 0, 00:09:03.373 "rw_mbytes_per_sec": 0, 00:09:03.373 "r_mbytes_per_sec": 0, 00:09:03.373 "w_mbytes_per_sec": 0 00:09:03.373 }, 00:09:03.373 "claimed": true, 00:09:03.373 "claim_type": "exclusive_write", 00:09:03.373 "zoned": false, 00:09:03.373 "supported_io_types": { 00:09:03.373 "read": true, 00:09:03.373 "write": true, 00:09:03.373 "unmap": 
true, 00:09:03.373 "flush": true, 00:09:03.373 "reset": true, 00:09:03.373 "nvme_admin": false, 00:09:03.373 "nvme_io": false, 00:09:03.373 "nvme_io_md": false, 00:09:03.373 "write_zeroes": true, 00:09:03.373 "zcopy": true, 00:09:03.373 "get_zone_info": false, 00:09:03.373 "zone_management": false, 00:09:03.373 "zone_append": false, 00:09:03.373 "compare": false, 00:09:03.373 "compare_and_write": false, 00:09:03.373 "abort": true, 00:09:03.373 "seek_hole": false, 00:09:03.373 "seek_data": false, 00:09:03.373 "copy": true, 00:09:03.373 "nvme_iov_md": false 00:09:03.373 }, 00:09:03.373 "memory_domains": [ 00:09:03.373 { 00:09:03.373 "dma_device_id": "system", 00:09:03.373 "dma_device_type": 1 00:09:03.373 }, 00:09:03.373 { 00:09:03.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.373 "dma_device_type": 2 00:09:03.373 } 00:09:03.373 ], 00:09:03.373 "driver_specific": {} 00:09:03.373 } 00:09:03.373 ] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.373 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.374 08:20:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.374 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.632 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.632 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.632 "name": "Existed_Raid", 00:09:03.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.632 "strip_size_kb": 64, 00:09:03.632 "state": "configuring", 00:09:03.632 "raid_level": "raid0", 00:09:03.632 "superblock": false, 00:09:03.632 "num_base_bdevs": 3, 00:09:03.632 "num_base_bdevs_discovered": 2, 00:09:03.632 "num_base_bdevs_operational": 3, 00:09:03.632 "base_bdevs_list": [ 00:09:03.632 { 00:09:03.632 "name": "BaseBdev1", 00:09:03.632 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:03.632 "is_configured": true, 00:09:03.632 "data_offset": 0, 00:09:03.632 "data_size": 65536 00:09:03.632 }, 00:09:03.632 { 00:09:03.632 "name": null, 00:09:03.632 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:03.632 "is_configured": false, 00:09:03.632 "data_offset": 0, 00:09:03.632 "data_size": 65536 00:09:03.632 }, 00:09:03.632 { 00:09:03.632 "name": "BaseBdev3", 00:09:03.632 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:03.632 "is_configured": true, 00:09:03.632 "data_offset": 0, 
00:09:03.632 "data_size": 65536 00:09:03.632 } 00:09:03.632 ] 00:09:03.632 }' 00:09:03.632 08:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.632 08:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 [2024-12-13 08:20:16.226394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.890 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.224 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.224 "name": "Existed_Raid", 00:09:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.224 "strip_size_kb": 64, 00:09:04.224 "state": "configuring", 00:09:04.224 "raid_level": "raid0", 00:09:04.224 "superblock": false, 00:09:04.224 "num_base_bdevs": 3, 00:09:04.224 "num_base_bdevs_discovered": 1, 00:09:04.224 "num_base_bdevs_operational": 3, 00:09:04.224 "base_bdevs_list": [ 00:09:04.224 { 00:09:04.224 "name": "BaseBdev1", 00:09:04.224 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:04.224 "is_configured": true, 00:09:04.224 "data_offset": 0, 00:09:04.224 "data_size": 65536 00:09:04.224 }, 00:09:04.224 { 
00:09:04.224 "name": null, 00:09:04.224 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:04.224 "is_configured": false, 00:09:04.224 "data_offset": 0, 00:09:04.224 "data_size": 65536 00:09:04.224 }, 00:09:04.224 { 00:09:04.224 "name": null, 00:09:04.224 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:04.224 "is_configured": false, 00:09:04.224 "data_offset": 0, 00:09:04.224 "data_size": 65536 00:09:04.224 } 00:09:04.224 ] 00:09:04.224 }' 00:09:04.224 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.224 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 [2024-12-13 08:20:16.717583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.484 "name": "Existed_Raid", 00:09:04.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.484 "strip_size_kb": 64, 00:09:04.484 "state": "configuring", 00:09:04.484 "raid_level": "raid0", 00:09:04.484 
"superblock": false, 00:09:04.484 "num_base_bdevs": 3, 00:09:04.484 "num_base_bdevs_discovered": 2, 00:09:04.484 "num_base_bdevs_operational": 3, 00:09:04.484 "base_bdevs_list": [ 00:09:04.484 { 00:09:04.484 "name": "BaseBdev1", 00:09:04.484 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:04.484 "is_configured": true, 00:09:04.484 "data_offset": 0, 00:09:04.484 "data_size": 65536 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "name": null, 00:09:04.484 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:04.484 "is_configured": false, 00:09:04.484 "data_offset": 0, 00:09:04.484 "data_size": 65536 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "name": "BaseBdev3", 00:09:04.484 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:04.484 "is_configured": true, 00:09:04.484 "data_offset": 0, 00:09:04.484 "data_size": 65536 00:09:04.484 } 00:09:04.484 ] 00:09:04.484 }' 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.484 08:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.050 [2024-12-13 08:20:17.280615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.050 08:20:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.308 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.308 "name": "Existed_Raid", 00:09:05.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.308 "strip_size_kb": 64, 00:09:05.308 "state": "configuring", 00:09:05.308 "raid_level": "raid0", 00:09:05.308 "superblock": false, 00:09:05.308 "num_base_bdevs": 3, 00:09:05.308 "num_base_bdevs_discovered": 1, 00:09:05.308 "num_base_bdevs_operational": 3, 00:09:05.308 "base_bdevs_list": [ 00:09:05.308 { 00:09:05.308 "name": null, 00:09:05.308 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:05.308 "is_configured": false, 00:09:05.308 "data_offset": 0, 00:09:05.308 "data_size": 65536 00:09:05.308 }, 00:09:05.308 { 00:09:05.308 "name": null, 00:09:05.308 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:05.308 "is_configured": false, 00:09:05.308 "data_offset": 0, 00:09:05.308 "data_size": 65536 00:09:05.308 }, 00:09:05.308 { 00:09:05.308 "name": "BaseBdev3", 00:09:05.308 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:05.308 "is_configured": true, 00:09:05.308 "data_offset": 0, 00:09:05.308 "data_size": 65536 00:09:05.308 } 00:09:05.308 ] 00:09:05.308 }' 00:09:05.308 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.308 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.566 [2024-12-13 08:20:17.918481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.566 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.825 "name": "Existed_Raid", 00:09:05.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.825 "strip_size_kb": 64, 00:09:05.825 "state": "configuring", 00:09:05.825 "raid_level": "raid0", 00:09:05.825 "superblock": false, 00:09:05.825 "num_base_bdevs": 3, 00:09:05.825 "num_base_bdevs_discovered": 2, 00:09:05.825 "num_base_bdevs_operational": 3, 00:09:05.825 "base_bdevs_list": [ 00:09:05.825 { 00:09:05.825 "name": null, 00:09:05.825 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:05.825 "is_configured": false, 00:09:05.825 "data_offset": 0, 00:09:05.825 "data_size": 65536 00:09:05.825 }, 00:09:05.825 { 00:09:05.825 "name": "BaseBdev2", 00:09:05.825 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:05.825 "is_configured": true, 00:09:05.825 "data_offset": 0, 00:09:05.825 "data_size": 65536 00:09:05.825 }, 00:09:05.825 { 00:09:05.825 "name": "BaseBdev3", 00:09:05.825 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:05.825 "is_configured": true, 00:09:05.825 "data_offset": 0, 00:09:05.825 "data_size": 65536 00:09:05.825 } 00:09:05.825 ] 00:09:05.825 }' 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.825 08:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.083 08:20:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.083 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3088a544-cab3-4b42-991d-019b6df831ab 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.342 [2024-12-13 08:20:18.518947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.342 [2024-12-13 08:20:18.519076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:06.342 [2024-12-13 08:20:18.519119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.342 [2024-12-13 08:20:18.519418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:06.342 [2024-12-13 08:20:18.519610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:06.342 [2024-12-13 08:20:18.519651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:06.342 [2024-12-13 08:20:18.519957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.342 NewBaseBdev 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:06.342 [ 00:09:06.342 { 00:09:06.342 "name": "NewBaseBdev", 00:09:06.342 "aliases": [ 00:09:06.342 "3088a544-cab3-4b42-991d-019b6df831ab" 00:09:06.342 ], 00:09:06.342 "product_name": "Malloc disk", 00:09:06.342 "block_size": 512, 00:09:06.342 "num_blocks": 65536, 00:09:06.342 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:06.342 "assigned_rate_limits": { 00:09:06.342 "rw_ios_per_sec": 0, 00:09:06.342 "rw_mbytes_per_sec": 0, 00:09:06.342 "r_mbytes_per_sec": 0, 00:09:06.342 "w_mbytes_per_sec": 0 00:09:06.342 }, 00:09:06.342 "claimed": true, 00:09:06.342 "claim_type": "exclusive_write", 00:09:06.342 "zoned": false, 00:09:06.342 "supported_io_types": { 00:09:06.342 "read": true, 00:09:06.342 "write": true, 00:09:06.342 "unmap": true, 00:09:06.342 "flush": true, 00:09:06.342 "reset": true, 00:09:06.342 "nvme_admin": false, 00:09:06.342 "nvme_io": false, 00:09:06.342 "nvme_io_md": false, 00:09:06.342 "write_zeroes": true, 00:09:06.342 "zcopy": true, 00:09:06.342 "get_zone_info": false, 00:09:06.342 "zone_management": false, 00:09:06.342 "zone_append": false, 00:09:06.342 "compare": false, 00:09:06.342 "compare_and_write": false, 00:09:06.342 "abort": true, 00:09:06.342 "seek_hole": false, 00:09:06.342 "seek_data": false, 00:09:06.342 "copy": true, 00:09:06.342 "nvme_iov_md": false 00:09:06.342 }, 00:09:06.342 "memory_domains": [ 00:09:06.342 { 00:09:06.342 "dma_device_id": "system", 00:09:06.342 "dma_device_type": 1 00:09:06.342 }, 00:09:06.342 { 00:09:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.342 "dma_device_type": 2 00:09:06.342 } 00:09:06.342 ], 00:09:06.342 "driver_specific": {} 00:09:06.342 } 00:09:06.342 ] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.342 "name": "Existed_Raid", 00:09:06.342 "uuid": "e4fe3463-c1ef-46d8-b6f1-5948abf8e28d", 00:09:06.342 "strip_size_kb": 64, 00:09:06.342 "state": "online", 00:09:06.342 "raid_level": "raid0", 00:09:06.342 "superblock": false, 00:09:06.342 "num_base_bdevs": 3, 00:09:06.342 
"num_base_bdevs_discovered": 3, 00:09:06.342 "num_base_bdevs_operational": 3, 00:09:06.342 "base_bdevs_list": [ 00:09:06.342 { 00:09:06.342 "name": "NewBaseBdev", 00:09:06.342 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:06.342 "is_configured": true, 00:09:06.342 "data_offset": 0, 00:09:06.342 "data_size": 65536 00:09:06.342 }, 00:09:06.342 { 00:09:06.342 "name": "BaseBdev2", 00:09:06.342 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:06.342 "is_configured": true, 00:09:06.342 "data_offset": 0, 00:09:06.342 "data_size": 65536 00:09:06.342 }, 00:09:06.342 { 00:09:06.342 "name": "BaseBdev3", 00:09:06.342 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:06.342 "is_configured": true, 00:09:06.342 "data_offset": 0, 00:09:06.342 "data_size": 65536 00:09:06.342 } 00:09:06.342 ] 00:09:06.342 }' 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.342 08:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.909 [2024-12-13 08:20:19.018526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.909 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.909 "name": "Existed_Raid", 00:09:06.909 "aliases": [ 00:09:06.909 "e4fe3463-c1ef-46d8-b6f1-5948abf8e28d" 00:09:06.909 ], 00:09:06.909 "product_name": "Raid Volume", 00:09:06.909 "block_size": 512, 00:09:06.909 "num_blocks": 196608, 00:09:06.909 "uuid": "e4fe3463-c1ef-46d8-b6f1-5948abf8e28d", 00:09:06.909 "assigned_rate_limits": { 00:09:06.909 "rw_ios_per_sec": 0, 00:09:06.909 "rw_mbytes_per_sec": 0, 00:09:06.909 "r_mbytes_per_sec": 0, 00:09:06.909 "w_mbytes_per_sec": 0 00:09:06.909 }, 00:09:06.909 "claimed": false, 00:09:06.909 "zoned": false, 00:09:06.909 "supported_io_types": { 00:09:06.909 "read": true, 00:09:06.909 "write": true, 00:09:06.909 "unmap": true, 00:09:06.909 "flush": true, 00:09:06.909 "reset": true, 00:09:06.909 "nvme_admin": false, 00:09:06.909 "nvme_io": false, 00:09:06.909 "nvme_io_md": false, 00:09:06.909 "write_zeroes": true, 00:09:06.909 "zcopy": false, 00:09:06.909 "get_zone_info": false, 00:09:06.909 "zone_management": false, 00:09:06.909 "zone_append": false, 00:09:06.909 "compare": false, 00:09:06.909 "compare_and_write": false, 00:09:06.909 "abort": false, 00:09:06.909 "seek_hole": false, 00:09:06.909 "seek_data": false, 00:09:06.909 "copy": false, 00:09:06.909 "nvme_iov_md": false 00:09:06.909 }, 00:09:06.909 "memory_domains": [ 00:09:06.909 { 00:09:06.909 "dma_device_id": "system", 00:09:06.909 "dma_device_type": 1 00:09:06.909 }, 00:09:06.909 { 00:09:06.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.909 "dma_device_type": 2 00:09:06.909 }, 00:09:06.909 
{ 00:09:06.909 "dma_device_id": "system", 00:09:06.909 "dma_device_type": 1 00:09:06.909 }, 00:09:06.909 { 00:09:06.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.909 "dma_device_type": 2 00:09:06.909 }, 00:09:06.909 { 00:09:06.909 "dma_device_id": "system", 00:09:06.909 "dma_device_type": 1 00:09:06.909 }, 00:09:06.909 { 00:09:06.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.909 "dma_device_type": 2 00:09:06.909 } 00:09:06.909 ], 00:09:06.909 "driver_specific": { 00:09:06.909 "raid": { 00:09:06.909 "uuid": "e4fe3463-c1ef-46d8-b6f1-5948abf8e28d", 00:09:06.910 "strip_size_kb": 64, 00:09:06.910 "state": "online", 00:09:06.910 "raid_level": "raid0", 00:09:06.910 "superblock": false, 00:09:06.910 "num_base_bdevs": 3, 00:09:06.910 "num_base_bdevs_discovered": 3, 00:09:06.910 "num_base_bdevs_operational": 3, 00:09:06.910 "base_bdevs_list": [ 00:09:06.910 { 00:09:06.910 "name": "NewBaseBdev", 00:09:06.910 "uuid": "3088a544-cab3-4b42-991d-019b6df831ab", 00:09:06.910 "is_configured": true, 00:09:06.910 "data_offset": 0, 00:09:06.910 "data_size": 65536 00:09:06.910 }, 00:09:06.910 { 00:09:06.910 "name": "BaseBdev2", 00:09:06.910 "uuid": "5f58f513-277b-4beb-af55-9c66100e00c2", 00:09:06.910 "is_configured": true, 00:09:06.910 "data_offset": 0, 00:09:06.910 "data_size": 65536 00:09:06.910 }, 00:09:06.910 { 00:09:06.910 "name": "BaseBdev3", 00:09:06.910 "uuid": "45944ceb-93b6-4c3b-9747-af45419615e7", 00:09:06.910 "is_configured": true, 00:09:06.910 "data_offset": 0, 00:09:06.910 "data_size": 65536 00:09:06.910 } 00:09:06.910 ] 00:09:06.910 } 00:09:06.910 } 00:09:06.910 }' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:06.910 BaseBdev2 00:09:06.910 BaseBdev3' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.910 
08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.910 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.168 [2024-12-13 08:20:19.325661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.168 [2024-12-13 08:20:19.325695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.168 [2024-12-13 08:20:19.325785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.168 [2024-12-13 08:20:19.325840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.168 [2024-12-13 08:20:19.325851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63968 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63968 ']' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63968 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63968 00:09:07.168 killing process with pid 63968 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63968' 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63968 00:09:07.168 [2024-12-13 08:20:19.377239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.168 08:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63968 00:09:07.427 [2024-12-13 08:20:19.677110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:08.802 00:09:08.802 real 0m10.864s 00:09:08.802 user 0m17.326s 00:09:08.802 sys 0m1.859s 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.802 
************************************ 00:09:08.802 END TEST raid_state_function_test 00:09:08.802 ************************************ 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.802 08:20:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:08.802 08:20:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.802 08:20:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.802 08:20:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.802 ************************************ 00:09:08.802 START TEST raid_state_function_test_sb 00:09:08.802 ************************************ 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64589 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.802 Process raid pid: 64589 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64589' 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64589 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64589 ']' 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.802 08:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.802 [2024-12-13 08:20:21.000167] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:08.802 [2024-12-13 08:20:21.000291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.061 [2024-12-13 08:20:21.175487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.061 [2024-12-13 08:20:21.303946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.318 [2024-12-13 08:20:21.507704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.318 [2024-12-13 08:20:21.507843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.576 [2024-12-13 08:20:21.900392] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.576 [2024-12-13 08:20:21.900679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.576 [2024-12-13 08:20:21.900704] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.576 [2024-12-13 08:20:21.900758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.576 [2024-12-13 08:20:21.900767] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:09.576 [2024-12-13 08:20:21.900813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.576 08:20:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.833 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.833 "name": "Existed_Raid", 00:09:09.833 "uuid": "f4a13200-279f-4487-a4d7-17020021fd60", 00:09:09.833 "strip_size_kb": 64, 00:09:09.833 "state": "configuring", 00:09:09.833 "raid_level": "raid0", 00:09:09.833 "superblock": true, 00:09:09.833 "num_base_bdevs": 3, 00:09:09.833 "num_base_bdevs_discovered": 0, 00:09:09.833 "num_base_bdevs_operational": 3, 00:09:09.833 "base_bdevs_list": [ 00:09:09.833 { 00:09:09.833 "name": "BaseBdev1", 00:09:09.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.833 "is_configured": false, 00:09:09.834 "data_offset": 0, 00:09:09.834 "data_size": 0 00:09:09.834 }, 00:09:09.834 { 00:09:09.834 "name": "BaseBdev2", 00:09:09.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.834 "is_configured": false, 00:09:09.834 "data_offset": 0, 00:09:09.834 "data_size": 0 00:09:09.834 }, 00:09:09.834 { 00:09:09.834 "name": "BaseBdev3", 00:09:09.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.834 "is_configured": false, 00:09:09.834 "data_offset": 0, 00:09:09.834 "data_size": 0 00:09:09.834 } 00:09:09.834 ] 00:09:09.834 }' 00:09:09.834 08:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.834 08:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 [2024-12-13 08:20:22.363571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.092 [2024-12-13 08:20:22.363675] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 [2024-12-13 08:20:22.375565] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.092 [2024-12-13 08:20:22.376020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.092 [2024-12-13 08:20:22.376077] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.092 [2024-12-13 08:20:22.376178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.092 [2024-12-13 08:20:22.376216] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.092 [2024-12-13 08:20:22.376280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 [2024-12-13 08:20:22.421870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.092 BaseBdev1 
00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.092 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.092 [ 00:09:10.092 { 00:09:10.092 "name": "BaseBdev1", 00:09:10.092 "aliases": [ 00:09:10.092 "841c99b8-3941-48c4-84fc-1dabae3c918c" 00:09:10.092 ], 00:09:10.092 "product_name": "Malloc disk", 00:09:10.092 "block_size": 512, 00:09:10.092 "num_blocks": 65536, 00:09:10.092 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:10.092 "assigned_rate_limits": { 00:09:10.092 
"rw_ios_per_sec": 0, 00:09:10.092 "rw_mbytes_per_sec": 0, 00:09:10.092 "r_mbytes_per_sec": 0, 00:09:10.092 "w_mbytes_per_sec": 0 00:09:10.092 }, 00:09:10.092 "claimed": true, 00:09:10.092 "claim_type": "exclusive_write", 00:09:10.092 "zoned": false, 00:09:10.092 "supported_io_types": { 00:09:10.092 "read": true, 00:09:10.092 "write": true, 00:09:10.092 "unmap": true, 00:09:10.092 "flush": true, 00:09:10.092 "reset": true, 00:09:10.092 "nvme_admin": false, 00:09:10.092 "nvme_io": false, 00:09:10.092 "nvme_io_md": false, 00:09:10.092 "write_zeroes": true, 00:09:10.092 "zcopy": true, 00:09:10.092 "get_zone_info": false, 00:09:10.092 "zone_management": false, 00:09:10.092 "zone_append": false, 00:09:10.092 "compare": false, 00:09:10.092 "compare_and_write": false, 00:09:10.092 "abort": true, 00:09:10.092 "seek_hole": false, 00:09:10.092 "seek_data": false, 00:09:10.092 "copy": true, 00:09:10.092 "nvme_iov_md": false 00:09:10.092 }, 00:09:10.350 "memory_domains": [ 00:09:10.351 { 00:09:10.351 "dma_device_id": "system", 00:09:10.351 "dma_device_type": 1 00:09:10.351 }, 00:09:10.351 { 00:09:10.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.351 "dma_device_type": 2 00:09:10.351 } 00:09:10.351 ], 00:09:10.351 "driver_specific": {} 00:09:10.351 } 00:09:10.351 ] 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.351 "name": "Existed_Raid", 00:09:10.351 "uuid": "0fb43973-ae5b-47d9-8488-5e6404fc475a", 00:09:10.351 "strip_size_kb": 64, 00:09:10.351 "state": "configuring", 00:09:10.351 "raid_level": "raid0", 00:09:10.351 "superblock": true, 00:09:10.351 "num_base_bdevs": 3, 00:09:10.351 "num_base_bdevs_discovered": 1, 00:09:10.351 "num_base_bdevs_operational": 3, 00:09:10.351 "base_bdevs_list": [ 00:09:10.351 { 00:09:10.351 "name": "BaseBdev1", 00:09:10.351 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:10.351 "is_configured": true, 00:09:10.351 "data_offset": 2048, 00:09:10.351 "data_size": 63488 
00:09:10.351 }, 00:09:10.351 { 00:09:10.351 "name": "BaseBdev2", 00:09:10.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.351 "is_configured": false, 00:09:10.351 "data_offset": 0, 00:09:10.351 "data_size": 0 00:09:10.351 }, 00:09:10.351 { 00:09:10.351 "name": "BaseBdev3", 00:09:10.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.351 "is_configured": false, 00:09:10.351 "data_offset": 0, 00:09:10.351 "data_size": 0 00:09:10.351 } 00:09:10.351 ] 00:09:10.351 }' 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.351 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.609 [2024-12-13 08:20:22.949052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.609 [2024-12-13 08:20:22.949226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.609 [2024-12-13 08:20:22.961093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.609 [2024-12-13 
08:20:22.963082] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.609 [2024-12-13 08:20:22.963278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.609 [2024-12-13 08:20:22.963293] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.609 [2024-12-13 08:20:22.963304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.609 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:10.867 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.867 08:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.867 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.867 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.867 08:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.867 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.867 "name": "Existed_Raid", 00:09:10.867 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:10.867 "strip_size_kb": 64, 00:09:10.867 "state": "configuring", 00:09:10.867 "raid_level": "raid0", 00:09:10.867 "superblock": true, 00:09:10.867 "num_base_bdevs": 3, 00:09:10.867 "num_base_bdevs_discovered": 1, 00:09:10.867 "num_base_bdevs_operational": 3, 00:09:10.867 "base_bdevs_list": [ 00:09:10.867 { 00:09:10.867 "name": "BaseBdev1", 00:09:10.867 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:10.867 "is_configured": true, 00:09:10.867 "data_offset": 2048, 00:09:10.867 "data_size": 63488 00:09:10.867 }, 00:09:10.867 { 00:09:10.867 "name": "BaseBdev2", 00:09:10.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.868 "is_configured": false, 00:09:10.868 "data_offset": 0, 00:09:10.868 "data_size": 0 00:09:10.868 }, 00:09:10.868 { 00:09:10.868 "name": "BaseBdev3", 00:09:10.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.868 "is_configured": false, 00:09:10.868 "data_offset": 0, 00:09:10.868 "data_size": 0 00:09:10.868 } 00:09:10.868 ] 00:09:10.868 }' 00:09:10.868 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.868 08:20:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.126 [2024-12-13 08:20:23.487583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.126 BaseBdev2 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.126 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 [ 00:09:11.384 { 00:09:11.384 "name": "BaseBdev2", 00:09:11.384 "aliases": [ 00:09:11.384 "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06" 00:09:11.384 ], 00:09:11.384 "product_name": "Malloc disk", 00:09:11.384 "block_size": 512, 00:09:11.384 "num_blocks": 65536, 00:09:11.384 "uuid": "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06", 00:09:11.384 "assigned_rate_limits": { 00:09:11.384 "rw_ios_per_sec": 0, 00:09:11.384 "rw_mbytes_per_sec": 0, 00:09:11.384 "r_mbytes_per_sec": 0, 00:09:11.384 "w_mbytes_per_sec": 0 00:09:11.384 }, 00:09:11.384 "claimed": true, 00:09:11.384 "claim_type": "exclusive_write", 00:09:11.384 "zoned": false, 00:09:11.384 "supported_io_types": { 00:09:11.384 "read": true, 00:09:11.384 "write": true, 00:09:11.384 "unmap": true, 00:09:11.384 "flush": true, 00:09:11.384 "reset": true, 00:09:11.384 "nvme_admin": false, 00:09:11.384 "nvme_io": false, 00:09:11.384 "nvme_io_md": false, 00:09:11.384 "write_zeroes": true, 00:09:11.384 "zcopy": true, 00:09:11.384 "get_zone_info": false, 00:09:11.384 "zone_management": false, 00:09:11.384 "zone_append": false, 00:09:11.384 "compare": false, 00:09:11.384 "compare_and_write": false, 00:09:11.384 "abort": true, 00:09:11.384 "seek_hole": false, 00:09:11.384 "seek_data": false, 00:09:11.384 "copy": true, 00:09:11.384 "nvme_iov_md": false 00:09:11.384 }, 00:09:11.384 "memory_domains": [ 00:09:11.384 { 00:09:11.384 "dma_device_id": "system", 00:09:11.384 "dma_device_type": 1 00:09:11.384 }, 00:09:11.384 { 00:09:11.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.384 "dma_device_type": 2 00:09:11.384 } 00:09:11.384 ], 00:09:11.384 "driver_specific": {} 00:09:11.384 } 00:09:11.384 ] 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.384 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.384 "name": "Existed_Raid", 00:09:11.384 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:11.384 "strip_size_kb": 64, 00:09:11.384 "state": "configuring", 00:09:11.384 "raid_level": "raid0", 00:09:11.384 "superblock": true, 00:09:11.384 "num_base_bdevs": 3, 00:09:11.384 "num_base_bdevs_discovered": 2, 00:09:11.384 "num_base_bdevs_operational": 3, 00:09:11.384 "base_bdevs_list": [ 00:09:11.384 { 00:09:11.384 "name": "BaseBdev1", 00:09:11.384 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:11.384 "is_configured": true, 00:09:11.384 "data_offset": 2048, 00:09:11.384 "data_size": 63488 00:09:11.385 }, 00:09:11.385 { 00:09:11.385 "name": "BaseBdev2", 00:09:11.385 "uuid": "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06", 00:09:11.385 "is_configured": true, 00:09:11.385 "data_offset": 2048, 00:09:11.385 "data_size": 63488 00:09:11.385 }, 00:09:11.385 { 00:09:11.385 "name": "BaseBdev3", 00:09:11.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.385 "is_configured": false, 00:09:11.385 "data_offset": 0, 00:09:11.385 "data_size": 0 00:09:11.385 } 00:09:11.385 ] 00:09:11.385 }' 00:09:11.385 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.385 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.643 08:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.643 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.643 08:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.901 [2024-12-13 08:20:24.047844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.901 [2024-12-13 08:20:24.048258] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.901 [2024-12-13 08:20:24.048321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.901 [2024-12-13 08:20:24.048612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.901 [2024-12-13 08:20:24.048806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.901 BaseBdev3 00:09:11.901 [2024-12-13 08:20:24.048849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.901 [2024-12-13 08:20:24.049006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.901 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.901 [ 00:09:11.901 { 00:09:11.901 "name": "BaseBdev3", 00:09:11.901 "aliases": [ 00:09:11.901 "68047cc4-2131-4e4e-8a66-56ad802b3c7e" 00:09:11.901 ], 00:09:11.901 "product_name": "Malloc disk", 00:09:11.902 "block_size": 512, 00:09:11.902 "num_blocks": 65536, 00:09:11.902 "uuid": "68047cc4-2131-4e4e-8a66-56ad802b3c7e", 00:09:11.902 "assigned_rate_limits": { 00:09:11.902 "rw_ios_per_sec": 0, 00:09:11.902 "rw_mbytes_per_sec": 0, 00:09:11.902 "r_mbytes_per_sec": 0, 00:09:11.902 "w_mbytes_per_sec": 0 00:09:11.902 }, 00:09:11.902 "claimed": true, 00:09:11.902 "claim_type": "exclusive_write", 00:09:11.902 "zoned": false, 00:09:11.902 "supported_io_types": { 00:09:11.902 "read": true, 00:09:11.902 "write": true, 00:09:11.902 "unmap": true, 00:09:11.902 "flush": true, 00:09:11.902 "reset": true, 00:09:11.902 "nvme_admin": false, 00:09:11.902 "nvme_io": false, 00:09:11.902 "nvme_io_md": false, 00:09:11.902 "write_zeroes": true, 00:09:11.902 "zcopy": true, 00:09:11.902 "get_zone_info": false, 00:09:11.902 "zone_management": false, 00:09:11.902 "zone_append": false, 00:09:11.902 "compare": false, 00:09:11.902 "compare_and_write": false, 00:09:11.902 "abort": true, 00:09:11.902 "seek_hole": false, 00:09:11.902 "seek_data": false, 00:09:11.902 "copy": true, 00:09:11.902 "nvme_iov_md": false 00:09:11.902 }, 00:09:11.902 "memory_domains": [ 00:09:11.902 { 00:09:11.902 "dma_device_id": "system", 00:09:11.902 "dma_device_type": 1 00:09:11.902 }, 00:09:11.902 { 00:09:11.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.902 "dma_device_type": 2 00:09:11.902 } 00:09:11.902 ], 00:09:11.902 "driver_specific": 
{} 00:09:11.902 } 00:09:11.902 ] 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.902 "name": "Existed_Raid", 00:09:11.902 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:11.902 "strip_size_kb": 64, 00:09:11.902 "state": "online", 00:09:11.902 "raid_level": "raid0", 00:09:11.902 "superblock": true, 00:09:11.902 "num_base_bdevs": 3, 00:09:11.902 "num_base_bdevs_discovered": 3, 00:09:11.902 "num_base_bdevs_operational": 3, 00:09:11.902 "base_bdevs_list": [ 00:09:11.902 { 00:09:11.902 "name": "BaseBdev1", 00:09:11.902 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:11.902 "is_configured": true, 00:09:11.902 "data_offset": 2048, 00:09:11.902 "data_size": 63488 00:09:11.902 }, 00:09:11.902 { 00:09:11.902 "name": "BaseBdev2", 00:09:11.902 "uuid": "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06", 00:09:11.902 "is_configured": true, 00:09:11.902 "data_offset": 2048, 00:09:11.902 "data_size": 63488 00:09:11.902 }, 00:09:11.902 { 00:09:11.902 "name": "BaseBdev3", 00:09:11.902 "uuid": "68047cc4-2131-4e4e-8a66-56ad802b3c7e", 00:09:11.902 "is_configured": true, 00:09:11.902 "data_offset": 2048, 00:09:11.902 "data_size": 63488 00:09:11.902 } 00:09:11.902 ] 00:09:11.902 }' 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.902 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.467 [2024-12-13 08:20:24.559415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.467 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.467 "name": "Existed_Raid", 00:09:12.467 "aliases": [ 00:09:12.467 "3a283525-7690-4be0-8b25-2f05a0aa3485" 00:09:12.467 ], 00:09:12.467 "product_name": "Raid Volume", 00:09:12.467 "block_size": 512, 00:09:12.467 "num_blocks": 190464, 00:09:12.467 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:12.467 "assigned_rate_limits": { 00:09:12.467 "rw_ios_per_sec": 0, 00:09:12.467 "rw_mbytes_per_sec": 0, 00:09:12.467 "r_mbytes_per_sec": 0, 00:09:12.467 "w_mbytes_per_sec": 0 00:09:12.467 }, 00:09:12.467 "claimed": false, 00:09:12.467 "zoned": false, 00:09:12.467 "supported_io_types": { 00:09:12.467 "read": true, 00:09:12.467 "write": true, 00:09:12.467 "unmap": true, 00:09:12.467 "flush": true, 00:09:12.467 "reset": true, 00:09:12.467 "nvme_admin": false, 00:09:12.467 "nvme_io": false, 00:09:12.467 "nvme_io_md": false, 00:09:12.467 
"write_zeroes": true, 00:09:12.467 "zcopy": false, 00:09:12.467 "get_zone_info": false, 00:09:12.467 "zone_management": false, 00:09:12.467 "zone_append": false, 00:09:12.467 "compare": false, 00:09:12.467 "compare_and_write": false, 00:09:12.467 "abort": false, 00:09:12.467 "seek_hole": false, 00:09:12.467 "seek_data": false, 00:09:12.467 "copy": false, 00:09:12.467 "nvme_iov_md": false 00:09:12.467 }, 00:09:12.467 "memory_domains": [ 00:09:12.467 { 00:09:12.467 "dma_device_id": "system", 00:09:12.467 "dma_device_type": 1 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.467 "dma_device_type": 2 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "dma_device_id": "system", 00:09:12.467 "dma_device_type": 1 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.467 "dma_device_type": 2 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "dma_device_id": "system", 00:09:12.467 "dma_device_type": 1 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.467 "dma_device_type": 2 00:09:12.467 } 00:09:12.467 ], 00:09:12.467 "driver_specific": { 00:09:12.467 "raid": { 00:09:12.467 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:12.467 "strip_size_kb": 64, 00:09:12.467 "state": "online", 00:09:12.467 "raid_level": "raid0", 00:09:12.467 "superblock": true, 00:09:12.467 "num_base_bdevs": 3, 00:09:12.467 "num_base_bdevs_discovered": 3, 00:09:12.467 "num_base_bdevs_operational": 3, 00:09:12.467 "base_bdevs_list": [ 00:09:12.467 { 00:09:12.467 "name": "BaseBdev1", 00:09:12.467 "uuid": "841c99b8-3941-48c4-84fc-1dabae3c918c", 00:09:12.467 "is_configured": true, 00:09:12.467 "data_offset": 2048, 00:09:12.467 "data_size": 63488 00:09:12.467 }, 00:09:12.467 { 00:09:12.467 "name": "BaseBdev2", 00:09:12.468 "uuid": "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06", 00:09:12.468 "is_configured": true, 00:09:12.468 "data_offset": 2048, 00:09:12.468 "data_size": 63488 00:09:12.468 }, 
00:09:12.468 { 00:09:12.468 "name": "BaseBdev3", 00:09:12.468 "uuid": "68047cc4-2131-4e4e-8a66-56ad802b3c7e", 00:09:12.468 "is_configured": true, 00:09:12.468 "data_offset": 2048, 00:09:12.468 "data_size": 63488 00:09:12.468 } 00:09:12.468 ] 00:09:12.468 } 00:09:12.468 } 00:09:12.468 }' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.468 BaseBdev2 00:09:12.468 BaseBdev3' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.468 
08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.468 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 [2024-12-13 08:20:24.834690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.728 [2024-12-13 08:20:24.834800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.728 [2024-12-13 08:20:24.834896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.728 "name": "Existed_Raid", 00:09:12.728 "uuid": "3a283525-7690-4be0-8b25-2f05a0aa3485", 00:09:12.728 "strip_size_kb": 64, 00:09:12.728 "state": "offline", 00:09:12.728 "raid_level": "raid0", 00:09:12.728 "superblock": true, 00:09:12.728 "num_base_bdevs": 3, 00:09:12.728 "num_base_bdevs_discovered": 2, 00:09:12.728 "num_base_bdevs_operational": 2, 00:09:12.728 "base_bdevs_list": [ 00:09:12.728 { 00:09:12.728 "name": null, 00:09:12.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.728 "is_configured": false, 00:09:12.728 "data_offset": 0, 00:09:12.728 "data_size": 63488 00:09:12.728 }, 00:09:12.728 { 00:09:12.728 "name": "BaseBdev2", 00:09:12.728 "uuid": "19c8e6ea-cd40-4e39-91a7-654d8e9a5b06", 00:09:12.728 "is_configured": true, 00:09:12.728 "data_offset": 2048, 00:09:12.728 "data_size": 63488 00:09:12.728 }, 00:09:12.728 { 00:09:12.728 "name": "BaseBdev3", 00:09:12.728 "uuid": "68047cc4-2131-4e4e-8a66-56ad802b3c7e", 
00:09:12.728 "is_configured": true, 00:09:12.728 "data_offset": 2048, 00:09:12.728 "data_size": 63488 00:09:12.728 } 00:09:12.728 ] 00:09:12.728 }' 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.728 08:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.295 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.296 [2024-12-13 08:20:25.492148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.296 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.296 [2024-12-13 08:20:25.649319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.296 [2024-12-13 08:20:25.649377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.576 BaseBdev2 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.576 08:20:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.576 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.577 [ 00:09:13.577 { 00:09:13.577 "name": "BaseBdev2", 00:09:13.577 "aliases": [ 00:09:13.577 "5a8efc5f-c525-4f46-bf8c-9db98906d78c" 00:09:13.577 ], 00:09:13.577 "product_name": "Malloc disk", 00:09:13.577 "block_size": 512, 00:09:13.577 "num_blocks": 65536, 00:09:13.577 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:13.577 "assigned_rate_limits": { 00:09:13.577 "rw_ios_per_sec": 0, 00:09:13.577 "rw_mbytes_per_sec": 0, 00:09:13.577 "r_mbytes_per_sec": 0, 00:09:13.577 "w_mbytes_per_sec": 0 00:09:13.577 }, 00:09:13.577 "claimed": false, 00:09:13.577 "zoned": false, 00:09:13.577 "supported_io_types": { 00:09:13.577 "read": true, 00:09:13.577 "write": true, 00:09:13.577 "unmap": true, 00:09:13.577 "flush": true, 00:09:13.577 "reset": true, 00:09:13.577 "nvme_admin": false, 00:09:13.577 "nvme_io": false, 00:09:13.577 "nvme_io_md": false, 00:09:13.577 "write_zeroes": true, 00:09:13.577 "zcopy": true, 00:09:13.577 "get_zone_info": false, 00:09:13.577 
"zone_management": false, 00:09:13.577 "zone_append": false, 00:09:13.577 "compare": false, 00:09:13.577 "compare_and_write": false, 00:09:13.577 "abort": true, 00:09:13.577 "seek_hole": false, 00:09:13.577 "seek_data": false, 00:09:13.577 "copy": true, 00:09:13.577 "nvme_iov_md": false 00:09:13.577 }, 00:09:13.577 "memory_domains": [ 00:09:13.577 { 00:09:13.577 "dma_device_id": "system", 00:09:13.577 "dma_device_type": 1 00:09:13.577 }, 00:09:13.577 { 00:09:13.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.577 "dma_device_type": 2 00:09:13.577 } 00:09:13.577 ], 00:09:13.577 "driver_specific": {} 00:09:13.577 } 00:09:13.577 ] 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.577 BaseBdev3 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.577 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.836 [ 00:09:13.836 { 00:09:13.836 "name": "BaseBdev3", 00:09:13.836 "aliases": [ 00:09:13.836 "c82bd441-edc1-482c-8846-eda2eba1d088" 00:09:13.836 ], 00:09:13.836 "product_name": "Malloc disk", 00:09:13.836 "block_size": 512, 00:09:13.836 "num_blocks": 65536, 00:09:13.836 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:13.836 "assigned_rate_limits": { 00:09:13.836 "rw_ios_per_sec": 0, 00:09:13.836 "rw_mbytes_per_sec": 0, 00:09:13.836 "r_mbytes_per_sec": 0, 00:09:13.836 "w_mbytes_per_sec": 0 00:09:13.836 }, 00:09:13.836 "claimed": false, 00:09:13.836 "zoned": false, 00:09:13.836 "supported_io_types": { 00:09:13.836 "read": true, 00:09:13.836 "write": true, 00:09:13.836 "unmap": true, 00:09:13.836 "flush": true, 00:09:13.836 "reset": true, 00:09:13.836 "nvme_admin": false, 00:09:13.836 "nvme_io": false, 00:09:13.836 "nvme_io_md": false, 00:09:13.836 "write_zeroes": true, 00:09:13.836 
"zcopy": true, 00:09:13.836 "get_zone_info": false, 00:09:13.836 "zone_management": false, 00:09:13.836 "zone_append": false, 00:09:13.836 "compare": false, 00:09:13.836 "compare_and_write": false, 00:09:13.836 "abort": true, 00:09:13.836 "seek_hole": false, 00:09:13.836 "seek_data": false, 00:09:13.836 "copy": true, 00:09:13.836 "nvme_iov_md": false 00:09:13.836 }, 00:09:13.836 "memory_domains": [ 00:09:13.836 { 00:09:13.836 "dma_device_id": "system", 00:09:13.836 "dma_device_type": 1 00:09:13.836 }, 00:09:13.836 { 00:09:13.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.836 "dma_device_type": 2 00:09:13.836 } 00:09:13.836 ], 00:09:13.836 "driver_specific": {} 00:09:13.836 } 00:09:13.836 ] 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.836 [2024-12-13 08:20:25.970544] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.836 [2024-12-13 08:20:25.970680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.836 [2024-12-13 08:20:25.970735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.836 [2024-12-13 08:20:25.972716] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.836 08:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.836 08:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.836 "name": "Existed_Raid", 00:09:13.836 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:13.836 "strip_size_kb": 64, 00:09:13.836 "state": "configuring", 00:09:13.836 "raid_level": "raid0", 00:09:13.836 "superblock": true, 00:09:13.836 "num_base_bdevs": 3, 00:09:13.836 "num_base_bdevs_discovered": 2, 00:09:13.836 "num_base_bdevs_operational": 3, 00:09:13.836 "base_bdevs_list": [ 00:09:13.836 { 00:09:13.836 "name": "BaseBdev1", 00:09:13.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.836 "is_configured": false, 00:09:13.836 "data_offset": 0, 00:09:13.836 "data_size": 0 00:09:13.836 }, 00:09:13.836 { 00:09:13.836 "name": "BaseBdev2", 00:09:13.836 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:13.836 "is_configured": true, 00:09:13.836 "data_offset": 2048, 00:09:13.836 "data_size": 63488 00:09:13.836 }, 00:09:13.836 { 00:09:13.836 "name": "BaseBdev3", 00:09:13.836 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:13.836 "is_configured": true, 00:09:13.836 "data_offset": 2048, 00:09:13.836 "data_size": 63488 00:09:13.836 } 00:09:13.836 ] 00:09:13.837 }' 00:09:13.837 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.837 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.096 [2024-12-13 08:20:26.417801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.096 08:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.096 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.354 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.355 "name": "Existed_Raid", 00:09:14.355 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:14.355 "strip_size_kb": 64, 
00:09:14.355 "state": "configuring", 00:09:14.355 "raid_level": "raid0", 00:09:14.355 "superblock": true, 00:09:14.355 "num_base_bdevs": 3, 00:09:14.355 "num_base_bdevs_discovered": 1, 00:09:14.355 "num_base_bdevs_operational": 3, 00:09:14.355 "base_bdevs_list": [ 00:09:14.355 { 00:09:14.355 "name": "BaseBdev1", 00:09:14.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.355 "is_configured": false, 00:09:14.355 "data_offset": 0, 00:09:14.355 "data_size": 0 00:09:14.355 }, 00:09:14.355 { 00:09:14.355 "name": null, 00:09:14.355 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:14.355 "is_configured": false, 00:09:14.355 "data_offset": 0, 00:09:14.355 "data_size": 63488 00:09:14.355 }, 00:09:14.355 { 00:09:14.355 "name": "BaseBdev3", 00:09:14.355 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:14.355 "is_configured": true, 00:09:14.355 "data_offset": 2048, 00:09:14.355 "data_size": 63488 00:09:14.355 } 00:09:14.355 ] 00:09:14.355 }' 00:09:14.355 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.355 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.614 [2024-12-13 08:20:26.927593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.614 BaseBdev1 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.614 
[ 00:09:14.614 { 00:09:14.614 "name": "BaseBdev1", 00:09:14.614 "aliases": [ 00:09:14.614 "d0e9333a-5777-44c6-a0d4-8795f74e281e" 00:09:14.614 ], 00:09:14.614 "product_name": "Malloc disk", 00:09:14.614 "block_size": 512, 00:09:14.614 "num_blocks": 65536, 00:09:14.614 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:14.614 "assigned_rate_limits": { 00:09:14.614 "rw_ios_per_sec": 0, 00:09:14.614 "rw_mbytes_per_sec": 0, 00:09:14.614 "r_mbytes_per_sec": 0, 00:09:14.614 "w_mbytes_per_sec": 0 00:09:14.614 }, 00:09:14.614 "claimed": true, 00:09:14.614 "claim_type": "exclusive_write", 00:09:14.614 "zoned": false, 00:09:14.614 "supported_io_types": { 00:09:14.614 "read": true, 00:09:14.614 "write": true, 00:09:14.614 "unmap": true, 00:09:14.614 "flush": true, 00:09:14.614 "reset": true, 00:09:14.614 "nvme_admin": false, 00:09:14.614 "nvme_io": false, 00:09:14.614 "nvme_io_md": false, 00:09:14.614 "write_zeroes": true, 00:09:14.614 "zcopy": true, 00:09:14.614 "get_zone_info": false, 00:09:14.614 "zone_management": false, 00:09:14.614 "zone_append": false, 00:09:14.614 "compare": false, 00:09:14.614 "compare_and_write": false, 00:09:14.614 "abort": true, 00:09:14.614 "seek_hole": false, 00:09:14.614 "seek_data": false, 00:09:14.614 "copy": true, 00:09:14.614 "nvme_iov_md": false 00:09:14.614 }, 00:09:14.614 "memory_domains": [ 00:09:14.614 { 00:09:14.614 "dma_device_id": "system", 00:09:14.614 "dma_device_type": 1 00:09:14.614 }, 00:09:14.614 { 00:09:14.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.614 "dma_device_type": 2 00:09:14.614 } 00:09:14.614 ], 00:09:14.614 "driver_specific": {} 00:09:14.614 } 00:09:14.614 ] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.614 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.873 08:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.873 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.873 "name": "Existed_Raid", 00:09:14.873 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:14.873 "strip_size_kb": 64, 00:09:14.873 "state": "configuring", 00:09:14.873 "raid_level": "raid0", 00:09:14.873 "superblock": true, 
00:09:14.873 "num_base_bdevs": 3, 00:09:14.873 "num_base_bdevs_discovered": 2, 00:09:14.873 "num_base_bdevs_operational": 3, 00:09:14.873 "base_bdevs_list": [ 00:09:14.873 { 00:09:14.873 "name": "BaseBdev1", 00:09:14.874 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:14.874 "is_configured": true, 00:09:14.874 "data_offset": 2048, 00:09:14.874 "data_size": 63488 00:09:14.874 }, 00:09:14.874 { 00:09:14.874 "name": null, 00:09:14.874 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:14.874 "is_configured": false, 00:09:14.874 "data_offset": 0, 00:09:14.874 "data_size": 63488 00:09:14.874 }, 00:09:14.874 { 00:09:14.874 "name": "BaseBdev3", 00:09:14.874 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:14.874 "is_configured": true, 00:09:14.874 "data_offset": 2048, 00:09:14.874 "data_size": 63488 00:09:14.874 } 00:09:14.874 ] 00:09:14.874 }' 00:09:14.874 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.874 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 [2024-12-13 08:20:27.458966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:15.132 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.391 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.391 "name": "Existed_Raid", 00:09:15.391 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:15.391 "strip_size_kb": 64, 00:09:15.391 "state": "configuring", 00:09:15.391 "raid_level": "raid0", 00:09:15.391 "superblock": true, 00:09:15.391 "num_base_bdevs": 3, 00:09:15.391 "num_base_bdevs_discovered": 1, 00:09:15.391 "num_base_bdevs_operational": 3, 00:09:15.391 "base_bdevs_list": [ 00:09:15.391 { 00:09:15.391 "name": "BaseBdev1", 00:09:15.391 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:15.391 "is_configured": true, 00:09:15.391 "data_offset": 2048, 00:09:15.391 "data_size": 63488 00:09:15.391 }, 00:09:15.391 { 00:09:15.391 "name": null, 00:09:15.391 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:15.391 "is_configured": false, 00:09:15.391 "data_offset": 0, 00:09:15.391 "data_size": 63488 00:09:15.391 }, 00:09:15.391 { 00:09:15.391 "name": null, 00:09:15.391 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:15.391 "is_configured": false, 00:09:15.391 "data_offset": 0, 00:09:15.391 "data_size": 63488 00:09:15.391 } 00:09:15.391 ] 00:09:15.391 }' 00:09:15.391 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.391 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.650 08:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 [2024-12-13 08:20:28.006082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.650 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.909 "name": "Existed_Raid", 00:09:15.909 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:15.909 "strip_size_kb": 64, 00:09:15.909 "state": "configuring", 00:09:15.909 "raid_level": "raid0", 00:09:15.909 "superblock": true, 00:09:15.909 "num_base_bdevs": 3, 00:09:15.909 "num_base_bdevs_discovered": 2, 00:09:15.909 "num_base_bdevs_operational": 3, 00:09:15.909 "base_bdevs_list": [ 00:09:15.909 { 00:09:15.909 "name": "BaseBdev1", 00:09:15.909 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:15.909 "is_configured": true, 00:09:15.909 "data_offset": 2048, 00:09:15.909 "data_size": 63488 00:09:15.909 }, 00:09:15.909 { 00:09:15.909 "name": null, 00:09:15.909 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:15.909 "is_configured": false, 00:09:15.909 "data_offset": 0, 00:09:15.909 "data_size": 63488 00:09:15.909 }, 00:09:15.909 { 00:09:15.909 "name": "BaseBdev3", 00:09:15.909 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:15.909 "is_configured": true, 00:09:15.909 "data_offset": 2048, 00:09:15.909 "data_size": 63488 00:09:15.909 } 00:09:15.909 ] 00:09:15.909 }' 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.909 08:20:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.169 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 [2024-12-13 08:20:28.533212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.432 "name": "Existed_Raid", 00:09:16.432 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:16.432 "strip_size_kb": 64, 00:09:16.432 "state": "configuring", 00:09:16.432 "raid_level": "raid0", 00:09:16.432 "superblock": true, 00:09:16.432 "num_base_bdevs": 3, 00:09:16.432 "num_base_bdevs_discovered": 1, 00:09:16.432 "num_base_bdevs_operational": 3, 00:09:16.432 "base_bdevs_list": [ 00:09:16.432 { 00:09:16.432 "name": null, 00:09:16.432 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:16.432 "is_configured": false, 00:09:16.432 "data_offset": 0, 00:09:16.432 "data_size": 63488 00:09:16.432 }, 00:09:16.432 { 00:09:16.432 "name": null, 00:09:16.432 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:16.432 "is_configured": false, 00:09:16.432 "data_offset": 0, 00:09:16.432 
"data_size": 63488 00:09:16.432 }, 00:09:16.432 { 00:09:16.432 "name": "BaseBdev3", 00:09:16.432 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:16.432 "is_configured": true, 00:09:16.432 "data_offset": 2048, 00:09:16.432 "data_size": 63488 00:09:16.432 } 00:09:16.432 ] 00:09:16.432 }' 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.432 08:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 [2024-12-13 08:20:29.160308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.001 08:20:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.001 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.002 "name": "Existed_Raid", 00:09:17.002 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:17.002 "strip_size_kb": 64, 00:09:17.002 "state": "configuring", 00:09:17.002 "raid_level": "raid0", 00:09:17.002 "superblock": true, 00:09:17.002 "num_base_bdevs": 3, 00:09:17.002 
"num_base_bdevs_discovered": 2, 00:09:17.002 "num_base_bdevs_operational": 3, 00:09:17.002 "base_bdevs_list": [ 00:09:17.002 { 00:09:17.002 "name": null, 00:09:17.002 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:17.002 "is_configured": false, 00:09:17.002 "data_offset": 0, 00:09:17.002 "data_size": 63488 00:09:17.002 }, 00:09:17.002 { 00:09:17.002 "name": "BaseBdev2", 00:09:17.002 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:17.002 "is_configured": true, 00:09:17.002 "data_offset": 2048, 00:09:17.002 "data_size": 63488 00:09:17.002 }, 00:09:17.002 { 00:09:17.002 "name": "BaseBdev3", 00:09:17.002 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:17.002 "is_configured": true, 00:09:17.002 "data_offset": 2048, 00:09:17.002 "data_size": 63488 00:09:17.002 } 00:09:17.002 ] 00:09:17.002 }' 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.002 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.261 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.261 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.261 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.261 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.261 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:17.521 08:20:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0e9333a-5777-44c6-a0d4-8795f74e281e 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 [2024-12-13 08:20:29.730428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:17.521 [2024-12-13 08:20:29.730802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:17.521 [2024-12-13 08:20:29.730861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.521 [2024-12-13 08:20:29.731189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:17.521 NewBaseBdev 00:09:17.521 [2024-12-13 08:20:29.731417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:17.521 [2024-12-13 08:20:29.731431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:17.521 [2024-12-13 08:20:29.731610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:17.521 
08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 [ 00:09:17.521 { 00:09:17.521 "name": "NewBaseBdev", 00:09:17.521 "aliases": [ 00:09:17.521 "d0e9333a-5777-44c6-a0d4-8795f74e281e" 00:09:17.521 ], 00:09:17.521 "product_name": "Malloc disk", 00:09:17.521 "block_size": 512, 00:09:17.521 "num_blocks": 65536, 00:09:17.521 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:17.521 "assigned_rate_limits": { 00:09:17.521 "rw_ios_per_sec": 0, 00:09:17.521 "rw_mbytes_per_sec": 0, 00:09:17.521 "r_mbytes_per_sec": 0, 00:09:17.521 "w_mbytes_per_sec": 0 00:09:17.521 }, 00:09:17.521 "claimed": true, 00:09:17.521 "claim_type": "exclusive_write", 00:09:17.521 "zoned": false, 00:09:17.521 "supported_io_types": { 00:09:17.521 "read": true, 00:09:17.521 "write": true, 00:09:17.521 
"unmap": true, 00:09:17.521 "flush": true, 00:09:17.521 "reset": true, 00:09:17.521 "nvme_admin": false, 00:09:17.521 "nvme_io": false, 00:09:17.521 "nvme_io_md": false, 00:09:17.521 "write_zeroes": true, 00:09:17.521 "zcopy": true, 00:09:17.521 "get_zone_info": false, 00:09:17.521 "zone_management": false, 00:09:17.521 "zone_append": false, 00:09:17.521 "compare": false, 00:09:17.521 "compare_and_write": false, 00:09:17.521 "abort": true, 00:09:17.521 "seek_hole": false, 00:09:17.521 "seek_data": false, 00:09:17.521 "copy": true, 00:09:17.521 "nvme_iov_md": false 00:09:17.521 }, 00:09:17.521 "memory_domains": [ 00:09:17.521 { 00:09:17.521 "dma_device_id": "system", 00:09:17.521 "dma_device_type": 1 00:09:17.521 }, 00:09:17.521 { 00:09:17.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.521 "dma_device_type": 2 00:09:17.521 } 00:09:17.521 ], 00:09:17.521 "driver_specific": {} 00:09:17.521 } 00:09:17.521 ] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.521 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.521 "name": "Existed_Raid", 00:09:17.521 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:17.521 "strip_size_kb": 64, 00:09:17.521 "state": "online", 00:09:17.521 "raid_level": "raid0", 00:09:17.521 "superblock": true, 00:09:17.521 "num_base_bdevs": 3, 00:09:17.521 "num_base_bdevs_discovered": 3, 00:09:17.521 "num_base_bdevs_operational": 3, 00:09:17.521 "base_bdevs_list": [ 00:09:17.521 { 00:09:17.521 "name": "NewBaseBdev", 00:09:17.521 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:17.521 "is_configured": true, 00:09:17.521 "data_offset": 2048, 00:09:17.521 "data_size": 63488 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev2", 00:09:17.522 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:17.522 "is_configured": true, 00:09:17.522 "data_offset": 2048, 00:09:17.522 "data_size": 63488 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev3", 00:09:17.522 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:17.522 
"is_configured": true, 00:09:17.522 "data_offset": 2048, 00:09:17.522 "data_size": 63488 00:09:17.522 } 00:09:17.522 ] 00:09:17.522 }' 00:09:17.522 08:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.522 08:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.091 [2024-12-13 08:20:30.237938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.091 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.091 "name": "Existed_Raid", 00:09:18.091 "aliases": [ 00:09:18.091 "58b0af2b-9871-4ebb-a727-2e7570b5875c" 00:09:18.091 ], 00:09:18.091 "product_name": "Raid 
Volume", 00:09:18.091 "block_size": 512, 00:09:18.091 "num_blocks": 190464, 00:09:18.091 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:18.091 "assigned_rate_limits": { 00:09:18.091 "rw_ios_per_sec": 0, 00:09:18.091 "rw_mbytes_per_sec": 0, 00:09:18.091 "r_mbytes_per_sec": 0, 00:09:18.091 "w_mbytes_per_sec": 0 00:09:18.091 }, 00:09:18.091 "claimed": false, 00:09:18.091 "zoned": false, 00:09:18.091 "supported_io_types": { 00:09:18.091 "read": true, 00:09:18.091 "write": true, 00:09:18.091 "unmap": true, 00:09:18.091 "flush": true, 00:09:18.091 "reset": true, 00:09:18.091 "nvme_admin": false, 00:09:18.091 "nvme_io": false, 00:09:18.091 "nvme_io_md": false, 00:09:18.091 "write_zeroes": true, 00:09:18.091 "zcopy": false, 00:09:18.091 "get_zone_info": false, 00:09:18.091 "zone_management": false, 00:09:18.091 "zone_append": false, 00:09:18.091 "compare": false, 00:09:18.091 "compare_and_write": false, 00:09:18.091 "abort": false, 00:09:18.091 "seek_hole": false, 00:09:18.091 "seek_data": false, 00:09:18.091 "copy": false, 00:09:18.091 "nvme_iov_md": false 00:09:18.091 }, 00:09:18.091 "memory_domains": [ 00:09:18.092 { 00:09:18.092 "dma_device_id": "system", 00:09:18.092 "dma_device_type": 1 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.092 "dma_device_type": 2 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "dma_device_id": "system", 00:09:18.092 "dma_device_type": 1 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.092 "dma_device_type": 2 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "dma_device_id": "system", 00:09:18.092 "dma_device_type": 1 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.092 "dma_device_type": 2 00:09:18.092 } 00:09:18.092 ], 00:09:18.092 "driver_specific": { 00:09:18.092 "raid": { 00:09:18.092 "uuid": "58b0af2b-9871-4ebb-a727-2e7570b5875c", 00:09:18.092 "strip_size_kb": 64, 00:09:18.092 "state": "online", 
00:09:18.092 "raid_level": "raid0", 00:09:18.092 "superblock": true, 00:09:18.092 "num_base_bdevs": 3, 00:09:18.092 "num_base_bdevs_discovered": 3, 00:09:18.092 "num_base_bdevs_operational": 3, 00:09:18.092 "base_bdevs_list": [ 00:09:18.092 { 00:09:18.092 "name": "NewBaseBdev", 00:09:18.092 "uuid": "d0e9333a-5777-44c6-a0d4-8795f74e281e", 00:09:18.092 "is_configured": true, 00:09:18.092 "data_offset": 2048, 00:09:18.092 "data_size": 63488 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "name": "BaseBdev2", 00:09:18.092 "uuid": "5a8efc5f-c525-4f46-bf8c-9db98906d78c", 00:09:18.092 "is_configured": true, 00:09:18.092 "data_offset": 2048, 00:09:18.092 "data_size": 63488 00:09:18.092 }, 00:09:18.092 { 00:09:18.092 "name": "BaseBdev3", 00:09:18.092 "uuid": "c82bd441-edc1-482c-8846-eda2eba1d088", 00:09:18.092 "is_configured": true, 00:09:18.092 "data_offset": 2048, 00:09:18.092 "data_size": 63488 00:09:18.092 } 00:09:18.092 ] 00:09:18.092 } 00:09:18.092 } 00:09:18.092 }' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:18.092 BaseBdev2 00:09:18.092 BaseBdev3' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.092 08:20:30 
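The dumped JSON above reports `"num_blocks": 190464` for a three-disk raid0 volume with superblock. As a sanity check (a sketch using the values from this log, not part of the SPDK test scripts themselves), the capacity is simply the per-base-bdev data size times the number of base bdevs:

```shell
# Sanity-check sketch using values from the dumped JSON above (not part of
# the SPDK test suite): raid0 capacity = data_size per base bdev * count.
num_base_bdevs=3
data_size=63488          # blocks per base bdev after the 2048-block superblock offset
num_blocks=$((num_base_bdevs * data_size))
echo "$num_blocks"       # prints 190464, matching "num_blocks" in the JSON
```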
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.092 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.351 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.351 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.351 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.352 08:20:30 
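The `[[ 512 == \5\1\2\ \ \  ]]` checks in the trace above compare the output of jq's `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`: because the three metadata fields are null for these bdevs, `join` leaves three trailing spaces after `512`, and xtrace escapes every character of the pattern, spaces included. A minimal bash emulation of that comparison (hypothetical values, no jq required):

```shell
# Emulates jq's join(" ") over [512, null, null, null]: the three null
# metadata fields become empty strings, so the result is "512" + 3 spaces.
fields=("512" "" "" "")
joined="$(IFS=' '; echo "${fields[*]}")"
# The xtrace form \5\1\2\ \ \  escapes each character, trailing spaces
# included, so the comparison is exact:
[[ "$joined" == "512   " ]] && echo match   # prints match
```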
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.352 [2024-12-13 08:20:30.525167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.352 [2024-12-13 08:20:30.525201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.352 [2024-12-13 08:20:30.525297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.352 [2024-12-13 08:20:30.525351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.352 [2024-12-13 08:20:30.525363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64589 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64589 ']' 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64589 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64589 00:09:18.352 killing process with pid 64589 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64589' 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64589 00:09:18.352 [2024-12-13 08:20:30.576841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:18.352 08:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64589 00:09:18.611 [2024-12-13 08:20:30.885893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.989 08:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:19.989 00:09:19.989 real 0m11.175s 00:09:19.989 user 0m17.841s 00:09:19.989 sys 0m1.956s 00:09:19.989 08:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.989 08:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.989 ************************************ 00:09:19.989 END TEST raid_state_function_test_sb 00:09:19.989 ************************************ 00:09:19.989 08:20:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:19.989 08:20:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.989 
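The `killprocess 64589` sequence above relies on `kill -0`, which delivers no signal but asks the kernel whether the pid still exists, plus `ps --no-headers -o comm=` to confirm the process name (`reactor_0`) before actually killing it. A stripped-down sketch of the liveness probe, with our own shell standing in for the SPDK app:

```shell
# Liveness probe in the style of killprocess: kill -0 sends no signal,
# it only reports whether $pid exists and is signalable by us.
pid=$$                       # stand-in for the SPDK app pid (64589 in the log)
if kill -0 "$pid" 2>/dev/null; then
  echo alive                 # prints alive: the current shell certainly exists
fi
```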
08:20:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.989 08:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.989 ************************************ 00:09:19.989 START TEST raid_superblock_test 00:09:19.989 ************************************ 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:19.989 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65220 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65220 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65220 ']' 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.990 08:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.990 [2024-12-13 08:20:32.232178] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
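The `strip_size=64` and `strip_size_create_arg='-z 64'` assignments above pass the strip size to `bdev_raid_create` in KiB; internally it is handled in blocks. A back-of-envelope conversion, assuming the 512-byte block size used throughout this run:

```shell
# Assumed conversion (512-byte blocks, as in this run): 64 KiB strip -> blocks.
strip_size_kb=64
block_size=512
strip_size_blocks=$((strip_size_kb * 1024 / block_size))
echo "$strip_size_blocks"    # prints 128
```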
00:09:19.990 [2024-12-13 08:20:32.232335] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65220 ] 00:09:20.248 [2024-12-13 08:20:32.408672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.248 [2024-12-13 08:20:32.531104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.507 [2024-12-13 08:20:32.744111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.507 [2024-12-13 08:20:32.744258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:20.766 
08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.766 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.025 malloc1 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.025 [2024-12-13 08:20:33.165773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:21.025 [2024-12-13 08:20:33.165905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.025 [2024-12-13 08:20:33.165947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:21.025 [2024-12-13 08:20:33.165976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.025 [2024-12-13 08:20:33.168340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.025 [2024-12-13 08:20:33.168420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:21.025 pt1 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:21.025 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 malloc2 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 [2024-12-13 08:20:33.222176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.026 [2024-12-13 08:20:33.222239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.026 [2024-12-13 08:20:33.222292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:21.026 [2024-12-13 08:20:33.222301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.026 [2024-12-13 08:20:33.224545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.026 [2024-12-13 08:20:33.224581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.026 
pt2 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 malloc3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 [2024-12-13 08:20:33.286713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.026 [2024-12-13 08:20:33.286816] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.026 [2024-12-13 08:20:33.286871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:21.026 [2024-12-13 08:20:33.286900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.026 [2024-12-13 08:20:33.289079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.026 [2024-12-13 08:20:33.289183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.026 pt3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 [2024-12-13 08:20:33.298746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:21.026 [2024-12-13 08:20:33.300771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.026 [2024-12-13 08:20:33.300886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.026 [2024-12-13 08:20:33.301079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:21.026 [2024-12-13 08:20:33.301174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.026 [2024-12-13 08:20:33.301468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
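The `(( i <= num_base_bdevs ))` loop traced above builds three parallel arrays (malloc name, passthru name, fixed UUID) before the raid0 volume is assembled from `'pt1 pt2 pt3'`. A self-contained sketch of that bookkeeping, following the same naming scheme:

```shell
# Sketch of the per-base-bdev bookkeeping loop from raid_superblock_test:
# three parallel arrays, one entry per base bdev.
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
num_base_bdevs=3
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")
  base_bdevs_pt+=("pt$i")
  base_bdevs_pt_uuid+=("00000000-0000-0000-0000-00000000000$i")
done
echo "${base_bdevs_pt[*]}"   # prints pt1 pt2 pt3
```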
00:09:21.026 [2024-12-13 08:20:33.301679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:21.026 [2024-12-13 08:20:33.301719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:21.026 [2024-12-13 08:20:33.301934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.026 08:20:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.026 "name": "raid_bdev1", 00:09:21.026 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:21.026 "strip_size_kb": 64, 00:09:21.026 "state": "online", 00:09:21.026 "raid_level": "raid0", 00:09:21.026 "superblock": true, 00:09:21.026 "num_base_bdevs": 3, 00:09:21.026 "num_base_bdevs_discovered": 3, 00:09:21.026 "num_base_bdevs_operational": 3, 00:09:21.026 "base_bdevs_list": [ 00:09:21.026 { 00:09:21.026 "name": "pt1", 00:09:21.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.026 "is_configured": true, 00:09:21.026 "data_offset": 2048, 00:09:21.026 "data_size": 63488 00:09:21.026 }, 00:09:21.026 { 00:09:21.026 "name": "pt2", 00:09:21.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.026 "is_configured": true, 00:09:21.026 "data_offset": 2048, 00:09:21.026 "data_size": 63488 00:09:21.026 }, 00:09:21.026 { 00:09:21.026 "name": "pt3", 00:09:21.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.026 "is_configured": true, 00:09:21.026 "data_offset": 2048, 00:09:21.026 "data_size": 63488 00:09:21.026 } 00:09:21.026 ] 00:09:21.026 }' 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.026 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.595 [2024-12-13 08:20:33.710312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.595 "name": "raid_bdev1", 00:09:21.595 "aliases": [ 00:09:21.595 "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26" 00:09:21.595 ], 00:09:21.595 "product_name": "Raid Volume", 00:09:21.595 "block_size": 512, 00:09:21.595 "num_blocks": 190464, 00:09:21.595 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:21.595 "assigned_rate_limits": { 00:09:21.595 "rw_ios_per_sec": 0, 00:09:21.595 "rw_mbytes_per_sec": 0, 00:09:21.595 "r_mbytes_per_sec": 0, 00:09:21.595 "w_mbytes_per_sec": 0 00:09:21.595 }, 00:09:21.595 "claimed": false, 00:09:21.595 "zoned": false, 00:09:21.595 "supported_io_types": { 00:09:21.595 "read": true, 00:09:21.595 "write": true, 00:09:21.595 "unmap": true, 00:09:21.595 "flush": true, 00:09:21.595 "reset": true, 00:09:21.595 "nvme_admin": false, 00:09:21.595 "nvme_io": false, 00:09:21.595 "nvme_io_md": false, 00:09:21.595 "write_zeroes": true, 00:09:21.595 "zcopy": false, 00:09:21.595 "get_zone_info": false, 00:09:21.595 "zone_management": false, 00:09:21.595 "zone_append": false, 00:09:21.595 "compare": 
false, 00:09:21.595 "compare_and_write": false, 00:09:21.595 "abort": false, 00:09:21.595 "seek_hole": false, 00:09:21.595 "seek_data": false, 00:09:21.595 "copy": false, 00:09:21.595 "nvme_iov_md": false 00:09:21.595 }, 00:09:21.595 "memory_domains": [ 00:09:21.595 { 00:09:21.595 "dma_device_id": "system", 00:09:21.595 "dma_device_type": 1 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.595 "dma_device_type": 2 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "dma_device_id": "system", 00:09:21.595 "dma_device_type": 1 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.595 "dma_device_type": 2 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "dma_device_id": "system", 00:09:21.595 "dma_device_type": 1 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.595 "dma_device_type": 2 00:09:21.595 } 00:09:21.595 ], 00:09:21.595 "driver_specific": { 00:09:21.595 "raid": { 00:09:21.595 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:21.595 "strip_size_kb": 64, 00:09:21.595 "state": "online", 00:09:21.595 "raid_level": "raid0", 00:09:21.595 "superblock": true, 00:09:21.595 "num_base_bdevs": 3, 00:09:21.595 "num_base_bdevs_discovered": 3, 00:09:21.595 "num_base_bdevs_operational": 3, 00:09:21.595 "base_bdevs_list": [ 00:09:21.595 { 00:09:21.595 "name": "pt1", 00:09:21.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.595 "is_configured": true, 00:09:21.595 "data_offset": 2048, 00:09:21.595 "data_size": 63488 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "name": "pt2", 00:09:21.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.595 "is_configured": true, 00:09:21.595 "data_offset": 2048, 00:09:21.595 "data_size": 63488 00:09:21.595 }, 00:09:21.595 { 00:09:21.595 "name": "pt3", 00:09:21.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.595 "is_configured": true, 00:09:21.595 "data_offset": 2048, 00:09:21.595 "data_size": 
63488 00:09:21.595 } 00:09:21.595 ] 00:09:21.595 } 00:09:21.595 } 00:09:21.595 }' 00:09:21.595 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.596 pt2 00:09:21.596 pt3' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
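The `for name in $base_bdev_names` iterations above depend on the expansion being unquoted: jq emits one bdev name per line (`pt1`, `pt2`, `pt3`), and word splitting turns the newline-separated string into three loop passes. A minimal demonstration of that behavior:

```shell
# Word splitting on the unquoted expansion turns jq's newline-separated
# output into one loop iteration per bdev name.
base_bdev_names='pt1
pt2
pt3'
count=0
for name in $base_bdev_names; do   # intentionally unquoted
  count=$((count + 1))
done
echo "$count"                      # prints 3
```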
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.596 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 [2024-12-13 08:20:34.001808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26 ']' 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 [2024-12-13 08:20:34.045423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.855 [2024-12-13 08:20:34.045510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.855 [2024-12-13 08:20:34.045633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.855 [2024-12-13 08:20:34.045727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.855 [2024-12-13 08:20:34.045778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:21.855 08:20:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:21.855 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.856 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 [2024-12-13 08:20:34.209261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:21.856 [2024-12-13 08:20:34.211306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:21.856 [2024-12-13 08:20:34.211399] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:21.856 [2024-12-13 08:20:34.211490] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:21.856 [2024-12-13 08:20:34.211593] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:21.856 [2024-12-13 08:20:34.211654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:21.856 [2024-12-13 08:20:34.211708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.856 [2024-12-13 08:20:34.211741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:21.856 request: 00:09:21.856 { 00:09:21.856 "name": "raid_bdev1", 00:09:21.856 "raid_level": "raid0", 00:09:21.856 "base_bdevs": [ 00:09:21.856 "malloc1", 00:09:21.856 "malloc2", 00:09:22.115 "malloc3" 00:09:22.115 ], 00:09:22.115 "strip_size_kb": 64, 00:09:22.115 "superblock": false, 00:09:22.115 "method": "bdev_raid_create", 00:09:22.115 "req_id": 1 00:09:22.115 } 00:09:22.115 Got JSON-RPC error response 00:09:22.115 response: 00:09:22.115 { 00:09:22.115 "code": -17, 00:09:22.115 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:22.115 } 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.115 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 [2024-12-13 08:20:34.277056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.115 [2024-12-13 08:20:34.277179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.115 [2024-12-13 08:20:34.277205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:22.115 [2024-12-13 08:20:34.277214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.116 [2024-12-13 08:20:34.279646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.116 [2024-12-13 08:20:34.279683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.116 [2024-12-13 08:20:34.279793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:22.116 [2024-12-13 08:20:34.279856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:22.116 pt1 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.116 "name": "raid_bdev1", 00:09:22.116 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:22.116 
"strip_size_kb": 64, 00:09:22.116 "state": "configuring", 00:09:22.116 "raid_level": "raid0", 00:09:22.116 "superblock": true, 00:09:22.116 "num_base_bdevs": 3, 00:09:22.116 "num_base_bdevs_discovered": 1, 00:09:22.116 "num_base_bdevs_operational": 3, 00:09:22.116 "base_bdevs_list": [ 00:09:22.116 { 00:09:22.116 "name": "pt1", 00:09:22.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.116 "is_configured": true, 00:09:22.116 "data_offset": 2048, 00:09:22.116 "data_size": 63488 00:09:22.116 }, 00:09:22.116 { 00:09:22.116 "name": null, 00:09:22.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.116 "is_configured": false, 00:09:22.116 "data_offset": 2048, 00:09:22.116 "data_size": 63488 00:09:22.116 }, 00:09:22.116 { 00:09:22.116 "name": null, 00:09:22.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.116 "is_configured": false, 00:09:22.116 "data_offset": 2048, 00:09:22.116 "data_size": 63488 00:09:22.116 } 00:09:22.116 ] 00:09:22.116 }' 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.116 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 [2024-12-13 08:20:34.764225] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.685 [2024-12-13 08:20:34.764381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.685 [2024-12-13 08:20:34.764424] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:22.685 [2024-12-13 08:20:34.764455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.685 [2024-12-13 08:20:34.764911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.685 [2024-12-13 08:20:34.764967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.685 [2024-12-13 08:20:34.765079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:22.685 [2024-12-13 08:20:34.765156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.685 pt2 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 [2024-12-13 08:20:34.776237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.685 08:20:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.685 "name": "raid_bdev1", 00:09:22.685 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:22.685 "strip_size_kb": 64, 00:09:22.685 "state": "configuring", 00:09:22.685 "raid_level": "raid0", 00:09:22.685 "superblock": true, 00:09:22.685 "num_base_bdevs": 3, 00:09:22.685 "num_base_bdevs_discovered": 1, 00:09:22.685 "num_base_bdevs_operational": 3, 00:09:22.685 "base_bdevs_list": [ 00:09:22.685 { 00:09:22.685 "name": "pt1", 00:09:22.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.685 "is_configured": true, 00:09:22.685 "data_offset": 2048, 00:09:22.685 "data_size": 63488 00:09:22.685 }, 00:09:22.685 { 00:09:22.685 "name": null, 00:09:22.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.685 "is_configured": false, 00:09:22.685 "data_offset": 0, 00:09:22.685 "data_size": 63488 00:09:22.685 }, 00:09:22.685 { 00:09:22.685 "name": null, 00:09:22.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.685 
"is_configured": false, 00:09:22.685 "data_offset": 2048, 00:09:22.685 "data_size": 63488 00:09:22.685 } 00:09:22.685 ] 00:09:22.685 }' 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.685 08:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.952 [2024-12-13 08:20:35.279324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.952 [2024-12-13 08:20:35.279622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.952 [2024-12-13 08:20:35.279695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:22.952 [2024-12-13 08:20:35.279750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.952 [2024-12-13 08:20:35.280318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.952 [2024-12-13 08:20:35.280423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.952 [2024-12-13 08:20:35.280560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:22.952 [2024-12-13 08:20:35.280597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.952 pt2 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.952 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.952 [2024-12-13 08:20:35.295266] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.952 [2024-12-13 08:20:35.295441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.952 [2024-12-13 08:20:35.295515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:22.952 [2024-12-13 08:20:35.295566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.952 [2024-12-13 08:20:35.296083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.952 [2024-12-13 08:20:35.296200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.952 [2024-12-13 08:20:35.296351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:22.953 [2024-12-13 08:20:35.296382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.953 [2024-12-13 08:20:35.296537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.953 [2024-12-13 08:20:35.296549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.953 [2024-12-13 08:20:35.296807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:22.953 [2024-12-13 08:20:35.296972] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.953 [2024-12-13 08:20:35.296981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:22.953 [2024-12-13 08:20:35.297152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.953 pt3 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.953 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.217 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.217 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.217 "name": "raid_bdev1", 00:09:23.217 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:23.217 "strip_size_kb": 64, 00:09:23.217 "state": "online", 00:09:23.217 "raid_level": "raid0", 00:09:23.217 "superblock": true, 00:09:23.217 "num_base_bdevs": 3, 00:09:23.217 "num_base_bdevs_discovered": 3, 00:09:23.217 "num_base_bdevs_operational": 3, 00:09:23.217 "base_bdevs_list": [ 00:09:23.217 { 00:09:23.217 "name": "pt1", 00:09:23.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.217 "is_configured": true, 00:09:23.217 "data_offset": 2048, 00:09:23.217 "data_size": 63488 00:09:23.217 }, 00:09:23.217 { 00:09:23.217 "name": "pt2", 00:09:23.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.217 "is_configured": true, 00:09:23.217 "data_offset": 2048, 00:09:23.217 "data_size": 63488 00:09:23.217 }, 00:09:23.217 { 00:09:23.217 "name": "pt3", 00:09:23.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.217 "is_configured": true, 00:09:23.217 "data_offset": 2048, 00:09:23.217 "data_size": 63488 00:09:23.217 } 00:09:23.217 ] 00:09:23.217 }' 00:09:23.217 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.217 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.477 08:20:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.477 [2024-12-13 08:20:35.786853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.477 "name": "raid_bdev1", 00:09:23.477 "aliases": [ 00:09:23.477 "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26" 00:09:23.477 ], 00:09:23.477 "product_name": "Raid Volume", 00:09:23.477 "block_size": 512, 00:09:23.477 "num_blocks": 190464, 00:09:23.477 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:23.477 "assigned_rate_limits": { 00:09:23.477 "rw_ios_per_sec": 0, 00:09:23.477 "rw_mbytes_per_sec": 0, 00:09:23.477 "r_mbytes_per_sec": 0, 00:09:23.477 "w_mbytes_per_sec": 0 00:09:23.477 }, 00:09:23.477 "claimed": false, 00:09:23.477 "zoned": false, 00:09:23.477 "supported_io_types": { 00:09:23.477 "read": true, 00:09:23.477 "write": true, 00:09:23.477 "unmap": true, 00:09:23.477 "flush": true, 00:09:23.477 "reset": true, 00:09:23.477 "nvme_admin": false, 00:09:23.477 "nvme_io": false, 00:09:23.477 "nvme_io_md": false, 00:09:23.477 
"write_zeroes": true, 00:09:23.477 "zcopy": false, 00:09:23.477 "get_zone_info": false, 00:09:23.477 "zone_management": false, 00:09:23.477 "zone_append": false, 00:09:23.477 "compare": false, 00:09:23.477 "compare_and_write": false, 00:09:23.477 "abort": false, 00:09:23.477 "seek_hole": false, 00:09:23.477 "seek_data": false, 00:09:23.477 "copy": false, 00:09:23.477 "nvme_iov_md": false 00:09:23.477 }, 00:09:23.477 "memory_domains": [ 00:09:23.477 { 00:09:23.477 "dma_device_id": "system", 00:09:23.477 "dma_device_type": 1 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.477 "dma_device_type": 2 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "dma_device_id": "system", 00:09:23.477 "dma_device_type": 1 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.477 "dma_device_type": 2 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "dma_device_id": "system", 00:09:23.477 "dma_device_type": 1 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.477 "dma_device_type": 2 00:09:23.477 } 00:09:23.477 ], 00:09:23.477 "driver_specific": { 00:09:23.477 "raid": { 00:09:23.477 "uuid": "d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26", 00:09:23.477 "strip_size_kb": 64, 00:09:23.477 "state": "online", 00:09:23.477 "raid_level": "raid0", 00:09:23.477 "superblock": true, 00:09:23.477 "num_base_bdevs": 3, 00:09:23.477 "num_base_bdevs_discovered": 3, 00:09:23.477 "num_base_bdevs_operational": 3, 00:09:23.477 "base_bdevs_list": [ 00:09:23.477 { 00:09:23.477 "name": "pt1", 00:09:23.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.477 "is_configured": true, 00:09:23.477 "data_offset": 2048, 00:09:23.477 "data_size": 63488 00:09:23.477 }, 00:09:23.477 { 00:09:23.477 "name": "pt2", 00:09:23.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.477 "is_configured": true, 00:09:23.477 "data_offset": 2048, 00:09:23.477 "data_size": 63488 00:09:23.477 }, 00:09:23.477 
{ 00:09:23.477 "name": "pt3", 00:09:23.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.477 "is_configured": true, 00:09:23.477 "data_offset": 2048, 00:09:23.477 "data_size": 63488 00:09:23.477 } 00:09:23.477 ] 00:09:23.477 } 00:09:23.477 } 00:09:23.477 }' 00:09:23.477 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.737 pt2 00:09:23.737 pt3' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.737 08:20:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.737 08:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.737 
[2024-12-13 08:20:36.054404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26 '!=' d7d6b7fa-24bb-481d-bc84-2fd16b2f2e26 ']' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65220 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65220 ']' 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65220 00:09:23.737 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65220 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65220' 00:09:23.997 killing process with pid 65220 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65220 00:09:23.997 [2024-12-13 08:20:36.128560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.997 [2024-12-13 08:20:36.128727] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.997 [2024-12-13 08:20:36.128815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.997 08:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65220 00:09:23.997 [2024-12-13 08:20:36.128863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:24.257 [2024-12-13 08:20:36.443897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.639 08:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:25.639 00:09:25.639 real 0m5.485s 00:09:25.639 user 0m7.887s 00:09:25.639 sys 0m0.945s 00:09:25.639 08:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.639 08:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.639 ************************************ 00:09:25.639 END TEST raid_superblock_test 00:09:25.639 ************************************ 00:09:25.639 08:20:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:25.639 08:20:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:25.639 08:20:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.639 08:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.639 ************************************ 00:09:25.639 START TEST raid_read_error_test 00:09:25.639 ************************************ 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:25.639 08:20:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VQVYd8pxaL 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65481 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65481 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65481 ']' 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.639 08:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.639 [2024-12-13 08:20:37.797962] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:25.639 [2024-12-13 08:20:37.798673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65481 ] 00:09:25.639 [2024-12-13 08:20:37.972609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.899 [2024-12-13 08:20:38.086809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.159 [2024-12-13 08:20:38.288426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.159 [2024-12-13 08:20:38.288571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.419 BaseBdev1_malloc 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.419 true 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.419 [2024-12-13 08:20:38.721000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.419 [2024-12-13 08:20:38.721068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.419 [2024-12-13 08:20:38.721091] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.419 [2024-12-13 08:20:38.721111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.419 [2024-12-13 08:20:38.723545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.419 [2024-12-13 08:20:38.723594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.419 BaseBdev1 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.419 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.419 BaseBdev2_malloc 00:09:26.420 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.420 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.420 08:20:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.420 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.420 true 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 [2024-12-13 08:20:38.790438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.680 [2024-12-13 08:20:38.790580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.680 [2024-12-13 08:20:38.790617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.680 [2024-12-13 08:20:38.790630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.680 [2024-12-13 08:20:38.792903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.680 [2024-12-13 08:20:38.792944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.680 BaseBdev2 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 BaseBdev3_malloc 00:09:26.680 08:20:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 true 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 [2024-12-13 08:20:38.873243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:26.680 [2024-12-13 08:20:38.873349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.680 [2024-12-13 08:20:38.873373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.680 [2024-12-13 08:20:38.873384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.680 [2024-12-13 08:20:38.875688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.680 [2024-12-13 08:20:38.875733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:26.680 BaseBdev3 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 [2024-12-13 08:20:38.885302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.680 [2024-12-13 08:20:38.887226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.680 [2024-12-13 08:20:38.887347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.680 [2024-12-13 08:20:38.887612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.680 [2024-12-13 08:20:38.887672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.680 [2024-12-13 08:20:38.887985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:26.680 [2024-12-13 08:20:38.888229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.680 [2024-12-13 08:20:38.888276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:26.680 [2024-12-13 08:20:38.888482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.680 08:20:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.680 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.680 "name": "raid_bdev1", 00:09:26.680 "uuid": "1a9c8d28-42ff-47d9-b88e-c8db7a41a1d5", 00:09:26.680 "strip_size_kb": 64, 00:09:26.680 "state": "online", 00:09:26.680 "raid_level": "raid0", 00:09:26.680 "superblock": true, 00:09:26.680 "num_base_bdevs": 3, 00:09:26.680 "num_base_bdevs_discovered": 3, 00:09:26.680 "num_base_bdevs_operational": 3, 00:09:26.680 "base_bdevs_list": [ 00:09:26.680 { 00:09:26.680 "name": "BaseBdev1", 00:09:26.680 "uuid": "f48d70d3-6431-50e1-99fa-b18dcdcbda83", 00:09:26.680 "is_configured": true, 00:09:26.680 "data_offset": 2048, 00:09:26.681 "data_size": 63488 00:09:26.681 }, 00:09:26.681 { 00:09:26.681 "name": "BaseBdev2", 00:09:26.681 "uuid": "8aa8f204-69bb-5336-b74e-963fc723fc43", 00:09:26.681 "is_configured": true, 00:09:26.681 "data_offset": 2048, 00:09:26.681 "data_size": 63488 
00:09:26.681 }, 00:09:26.681 { 00:09:26.681 "name": "BaseBdev3", 00:09:26.681 "uuid": "78799c25-2e0b-5445-b1a3-3a4917f1fce7", 00:09:26.681 "is_configured": true, 00:09:26.681 "data_offset": 2048, 00:09:26.681 "data_size": 63488 00:09:26.681 } 00:09:26.681 ] 00:09:26.681 }' 00:09:26.681 08:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.681 08:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.941 08:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.941 08:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:27.200 [2024-12-13 08:20:39.401761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.140 "name": "raid_bdev1", 00:09:28.140 "uuid": "1a9c8d28-42ff-47d9-b88e-c8db7a41a1d5", 00:09:28.140 "strip_size_kb": 64, 00:09:28.140 "state": "online", 00:09:28.140 "raid_level": "raid0", 00:09:28.140 "superblock": true, 00:09:28.140 "num_base_bdevs": 3, 00:09:28.140 "num_base_bdevs_discovered": 3, 00:09:28.140 "num_base_bdevs_operational": 3, 00:09:28.140 "base_bdevs_list": [ 00:09:28.140 { 00:09:28.140 "name": "BaseBdev1", 00:09:28.140 "uuid": "f48d70d3-6431-50e1-99fa-b18dcdcbda83", 00:09:28.140 "is_configured": true, 00:09:28.140 "data_offset": 2048, 00:09:28.140 "data_size": 63488 
00:09:28.140 }, 00:09:28.140 { 00:09:28.140 "name": "BaseBdev2", 00:09:28.140 "uuid": "8aa8f204-69bb-5336-b74e-963fc723fc43", 00:09:28.140 "is_configured": true, 00:09:28.140 "data_offset": 2048, 00:09:28.140 "data_size": 63488 00:09:28.140 }, 00:09:28.140 { 00:09:28.140 "name": "BaseBdev3", 00:09:28.140 "uuid": "78799c25-2e0b-5445-b1a3-3a4917f1fce7", 00:09:28.140 "is_configured": true, 00:09:28.140 "data_offset": 2048, 00:09:28.140 "data_size": 63488 00:09:28.140 } 00:09:28.140 ] 00:09:28.140 }' 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.140 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.711 [2024-12-13 08:20:40.806986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.711 [2024-12-13 08:20:40.807086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.711 [2024-12-13 08:20:40.810104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.711 [2024-12-13 08:20:40.810221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.711 [2024-12-13 08:20:40.810296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.711 [2024-12-13 08:20:40.810369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:28.711 { 00:09:28.711 "results": [ 00:09:28.711 { 00:09:28.711 "job": "raid_bdev1", 00:09:28.711 "core_mask": "0x1", 00:09:28.711 "workload": "randrw", 00:09:28.711 "percentage": 50, 
00:09:28.711 "status": "finished", 00:09:28.711 "queue_depth": 1, 00:09:28.711 "io_size": 131072, 00:09:28.711 "runtime": 1.406101, 00:09:28.711 "iops": 14447.04185545704, 00:09:28.711 "mibps": 1805.88023193213, 00:09:28.711 "io_failed": 1, 00:09:28.711 "io_timeout": 0, 00:09:28.711 "avg_latency_us": 95.92404278895603, 00:09:28.711 "min_latency_us": 27.72401746724891, 00:09:28.711 "max_latency_us": 1609.7816593886462 00:09:28.711 } 00:09:28.711 ], 00:09:28.711 "core_count": 1 00:09:28.711 } 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65481 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65481 ']' 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65481 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65481 00:09:28.711 killing process with pid 65481 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65481' 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65481 00:09:28.711 [2024-12-13 08:20:40.843499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.711 08:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65481 00:09:28.971 [2024-12-13 
08:20:41.079184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VQVYd8pxaL 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:30.351 00:09:30.351 real 0m4.624s 00:09:30.351 user 0m5.490s 00:09:30.351 sys 0m0.575s 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.351 08:20:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.351 ************************************ 00:09:30.351 END TEST raid_read_error_test 00:09:30.351 ************************************ 00:09:30.351 08:20:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:30.351 08:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.351 08:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.351 08:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.351 ************************************ 00:09:30.351 START TEST raid_write_error_test 00:09:30.351 ************************************ 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:30.351 08:20:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.351 08:20:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TMNxuYeILd 00:09:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65621 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65621 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65621 ']' 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.351 08:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.351 [2024-12-13 08:20:42.459340] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:09:30.351 [2024-12-13 08:20:42.459471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65621 ] 00:09:30.351 [2024-12-13 08:20:42.633746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.610 [2024-12-13 08:20:42.748222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.610 [2024-12-13 08:20:42.957931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.610 [2024-12-13 08:20:42.957970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 BaseBdev1_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 true 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 [2024-12-13 08:20:43.378660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.178 [2024-12-13 08:20:43.378812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.178 [2024-12-13 08:20:43.378860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:31.178 [2024-12-13 08:20:43.378892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.178 [2024-12-13 08:20:43.381211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.178 [2024-12-13 08:20:43.381301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.178 BaseBdev1 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.178 BaseBdev2_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 true 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.178 [2024-12-13 08:20:43.445951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:31.178 [2024-12-13 08:20:43.446054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.178 [2024-12-13 08:20:43.446090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:31.178 [2024-12-13 08:20:43.446128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.178 [2024-12-13 08:20:43.448221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.178 [2024-12-13 08:20:43.448297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.178 BaseBdev2 00:09:31.178 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.179 08:20:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 BaseBdev3_malloc 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 true 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 [2024-12-13 08:20:43.526331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:31.179 [2024-12-13 08:20:43.526433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.179 [2024-12-13 08:20:43.526472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:31.179 [2024-12-13 08:20:43.526523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.179 [2024-12-13 08:20:43.528806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.179 [2024-12-13 08:20:43.528882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:31.179 BaseBdev3 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.179 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 [2024-12-13 08:20:43.538381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.179 [2024-12-13 08:20:43.540256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.179 [2024-12-13 08:20:43.540374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.179 [2024-12-13 08:20:43.540611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:31.179 [2024-12-13 08:20:43.540661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.179 [2024-12-13 08:20:43.540941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:31.179 [2024-12-13 08:20:43.541151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:31.179 [2024-12-13 08:20:43.541198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:31.179 [2024-12-13 08:20:43.541413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.437 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.437 "name": "raid_bdev1", 00:09:31.438 "uuid": "fdf6700d-2d9a-4069-8842-6269ed956c46", 00:09:31.438 "strip_size_kb": 64, 00:09:31.438 "state": "online", 00:09:31.438 "raid_level": "raid0", 00:09:31.438 "superblock": true, 00:09:31.438 "num_base_bdevs": 3, 00:09:31.438 "num_base_bdevs_discovered": 3, 00:09:31.438 "num_base_bdevs_operational": 3, 00:09:31.438 "base_bdevs_list": [ 00:09:31.438 { 00:09:31.438 "name": "BaseBdev1", 
00:09:31.438 "uuid": "4fbe7d4c-3b80-5d03-ab96-0c91fcea55dc", 00:09:31.438 "is_configured": true, 00:09:31.438 "data_offset": 2048, 00:09:31.438 "data_size": 63488 00:09:31.438 }, 00:09:31.438 { 00:09:31.438 "name": "BaseBdev2", 00:09:31.438 "uuid": "42a788c1-f880-575d-85bd-9678a6e26552", 00:09:31.438 "is_configured": true, 00:09:31.438 "data_offset": 2048, 00:09:31.438 "data_size": 63488 00:09:31.438 }, 00:09:31.438 { 00:09:31.438 "name": "BaseBdev3", 00:09:31.438 "uuid": "29894abf-1e83-5115-bb70-25ae9f274a73", 00:09:31.438 "is_configured": true, 00:09:31.438 "data_offset": 2048, 00:09:31.438 "data_size": 63488 00:09:31.438 } 00:09:31.438 ] 00:09:31.438 }' 00:09:31.438 08:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.438 08:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.697 08:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:31.697 08:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:31.956 [2024-12-13 08:20:44.134826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.965 "name": "raid_bdev1", 00:09:32.965 "uuid": "fdf6700d-2d9a-4069-8842-6269ed956c46", 00:09:32.965 "strip_size_kb": 64, 00:09:32.965 "state": "online", 00:09:32.965 
"raid_level": "raid0", 00:09:32.965 "superblock": true, 00:09:32.965 "num_base_bdevs": 3, 00:09:32.965 "num_base_bdevs_discovered": 3, 00:09:32.965 "num_base_bdevs_operational": 3, 00:09:32.965 "base_bdevs_list": [ 00:09:32.965 { 00:09:32.965 "name": "BaseBdev1", 00:09:32.965 "uuid": "4fbe7d4c-3b80-5d03-ab96-0c91fcea55dc", 00:09:32.965 "is_configured": true, 00:09:32.965 "data_offset": 2048, 00:09:32.965 "data_size": 63488 00:09:32.965 }, 00:09:32.965 { 00:09:32.965 "name": "BaseBdev2", 00:09:32.965 "uuid": "42a788c1-f880-575d-85bd-9678a6e26552", 00:09:32.965 "is_configured": true, 00:09:32.965 "data_offset": 2048, 00:09:32.965 "data_size": 63488 00:09:32.965 }, 00:09:32.965 { 00:09:32.965 "name": "BaseBdev3", 00:09:32.965 "uuid": "29894abf-1e83-5115-bb70-25ae9f274a73", 00:09:32.965 "is_configured": true, 00:09:32.965 "data_offset": 2048, 00:09:32.965 "data_size": 63488 00:09:32.965 } 00:09:32.965 ] 00:09:32.965 }' 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.965 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.224 [2024-12-13 08:20:45.490514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.224 [2024-12-13 08:20:45.490600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.224 [2024-12-13 08:20:45.493513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.224 [2024-12-13 08:20:45.493597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.224 [2024-12-13 08:20:45.493656] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.224 [2024-12-13 08:20:45.493700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:33.224 { 00:09:33.224 "results": [ 00:09:33.224 { 00:09:33.224 "job": "raid_bdev1", 00:09:33.224 "core_mask": "0x1", 00:09:33.224 "workload": "randrw", 00:09:33.224 "percentage": 50, 00:09:33.224 "status": "finished", 00:09:33.224 "queue_depth": 1, 00:09:33.224 "io_size": 131072, 00:09:33.224 "runtime": 1.356551, 00:09:33.224 "iops": 15089.738609163975, 00:09:33.224 "mibps": 1886.2173261454968, 00:09:33.224 "io_failed": 1, 00:09:33.224 "io_timeout": 0, 00:09:33.224 "avg_latency_us": 91.99549320916009, 00:09:33.224 "min_latency_us": 27.276855895196505, 00:09:33.224 "max_latency_us": 1359.3711790393013 00:09:33.224 } 00:09:33.224 ], 00:09:33.224 "core_count": 1 00:09:33.224 } 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65621 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65621 ']' 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65621 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65621 00:09:33.224 killing process with pid 65621 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.224 
08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65621' 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65621 00:09:33.224 [2024-12-13 08:20:45.539504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.224 08:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65621 00:09:33.483 [2024-12-13 08:20:45.774427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.862 08:20:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TMNxuYeILd 00:09:34.862 08:20:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:34.862 08:20:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:34.862 ************************************ 00:09:34.862 END TEST raid_write_error_test 00:09:34.862 ************************************ 00:09:34.862 00:09:34.862 real 0m4.626s 00:09:34.862 user 0m5.525s 00:09:34.862 sys 0m0.569s 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.862 08:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.862 08:20:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:34.862 08:20:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:34.862 08:20:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:34.862 08:20:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.862 08:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.862 ************************************ 00:09:34.862 START TEST raid_state_function_test 00:09:34.862 ************************************ 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:34.862 08:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:34.862 Process raid pid: 65765 00:09:34.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65765 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65765' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65765 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65765 ']' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.862 08:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:34.862 [2024-12-13 08:20:47.173371] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:34.862 [2024-12-13 08:20:47.173648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.122 [2024-12-13 08:20:47.361370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.122 [2024-12-13 08:20:47.485088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.382 [2024-12-13 08:20:47.704380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.382 [2024-12-13 08:20:47.704503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.951 [2024-12-13 08:20:48.041315] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.951 [2024-12-13 08:20:48.041430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.951 [2024-12-13 08:20:48.041466] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.951 [2024-12-13 08:20:48.041495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.951 [2024-12-13 08:20:48.041517] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:09:35.951 [2024-12-13 08:20:48.041541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.951 08:20:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.951 "name": "Existed_Raid", 00:09:35.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.951 "strip_size_kb": 64, 00:09:35.951 "state": "configuring", 00:09:35.951 "raid_level": "concat", 00:09:35.951 "superblock": false, 00:09:35.951 "num_base_bdevs": 3, 00:09:35.951 "num_base_bdevs_discovered": 0, 00:09:35.951 "num_base_bdevs_operational": 3, 00:09:35.951 "base_bdevs_list": [ 00:09:35.951 { 00:09:35.951 "name": "BaseBdev1", 00:09:35.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.951 "is_configured": false, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 0 00:09:35.951 }, 00:09:35.951 { 00:09:35.951 "name": "BaseBdev2", 00:09:35.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.951 "is_configured": false, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 0 00:09:35.951 }, 00:09:35.951 { 00:09:35.951 "name": "BaseBdev3", 00:09:35.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.951 "is_configured": false, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 0 00:09:35.951 } 00:09:35.951 ] 00:09:35.951 }' 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.951 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 [2024-12-13 08:20:48.500475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.211 [2024-12-13 08:20:48.500565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 [2024-12-13 08:20:48.508467] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.211 [2024-12-13 08:20:48.508518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.211 [2024-12-13 08:20:48.508530] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.211 [2024-12-13 08:20:48.508541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.211 [2024-12-13 08:20:48.508548] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.211 [2024-12-13 08:20:48.508557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 [2024-12-13 08:20:48.553761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.211 BaseBdev1 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 [ 00:09:36.471 { 00:09:36.471 "name": "BaseBdev1", 00:09:36.471 "aliases": [ 00:09:36.471 "365689c2-5c2d-47ae-aad6-e33c1eb5a105" 00:09:36.471 ], 00:09:36.471 "product_name": "Malloc disk", 00:09:36.471 "block_size": 512, 00:09:36.471 "num_blocks": 65536, 00:09:36.471 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:36.471 "assigned_rate_limits": { 00:09:36.471 "rw_ios_per_sec": 0, 00:09:36.471 "rw_mbytes_per_sec": 0, 00:09:36.471 "r_mbytes_per_sec": 0, 00:09:36.471 "w_mbytes_per_sec": 0 00:09:36.471 }, 
00:09:36.471 "claimed": true, 00:09:36.471 "claim_type": "exclusive_write", 00:09:36.471 "zoned": false, 00:09:36.471 "supported_io_types": { 00:09:36.471 "read": true, 00:09:36.471 "write": true, 00:09:36.471 "unmap": true, 00:09:36.471 "flush": true, 00:09:36.471 "reset": true, 00:09:36.471 "nvme_admin": false, 00:09:36.471 "nvme_io": false, 00:09:36.471 "nvme_io_md": false, 00:09:36.471 "write_zeroes": true, 00:09:36.471 "zcopy": true, 00:09:36.471 "get_zone_info": false, 00:09:36.471 "zone_management": false, 00:09:36.471 "zone_append": false, 00:09:36.471 "compare": false, 00:09:36.471 "compare_and_write": false, 00:09:36.471 "abort": true, 00:09:36.471 "seek_hole": false, 00:09:36.471 "seek_data": false, 00:09:36.471 "copy": true, 00:09:36.471 "nvme_iov_md": false 00:09:36.471 }, 00:09:36.471 "memory_domains": [ 00:09:36.471 { 00:09:36.471 "dma_device_id": "system", 00:09:36.471 "dma_device_type": 1 00:09:36.471 }, 00:09:36.471 { 00:09:36.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.471 "dma_device_type": 2 00:09:36.471 } 00:09:36.471 ], 00:09:36.471 "driver_specific": {} 00:09:36.471 } 00:09:36.471 ] 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.471 08:20:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.471 "name": "Existed_Raid", 00:09:36.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.471 "strip_size_kb": 64, 00:09:36.471 "state": "configuring", 00:09:36.471 "raid_level": "concat", 00:09:36.471 "superblock": false, 00:09:36.471 "num_base_bdevs": 3, 00:09:36.471 "num_base_bdevs_discovered": 1, 00:09:36.471 "num_base_bdevs_operational": 3, 00:09:36.471 "base_bdevs_list": [ 00:09:36.471 { 00:09:36.471 "name": "BaseBdev1", 00:09:36.471 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:36.471 "is_configured": true, 00:09:36.471 "data_offset": 0, 00:09:36.471 "data_size": 65536 00:09:36.471 }, 00:09:36.471 { 00:09:36.471 "name": "BaseBdev2", 00:09:36.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.471 "is_configured": false, 00:09:36.471 
"data_offset": 0, 00:09:36.471 "data_size": 0 00:09:36.471 }, 00:09:36.471 { 00:09:36.471 "name": "BaseBdev3", 00:09:36.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.471 "is_configured": false, 00:09:36.471 "data_offset": 0, 00:09:36.471 "data_size": 0 00:09:36.471 } 00:09:36.471 ] 00:09:36.471 }' 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.471 08:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 [2024-12-13 08:20:49.017115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.730 [2024-12-13 08:20:49.017260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.730 [2024-12-13 08:20:49.025159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.730 [2024-12-13 08:20:49.027667] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.730 [2024-12-13 08:20:49.027787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:36.730 [2024-12-13 08:20:49.027842] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.730 [2024-12-13 08:20:49.027883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:36.730 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.731 "name": "Existed_Raid", 00:09:36.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.731 "strip_size_kb": 64, 00:09:36.731 "state": "configuring", 00:09:36.731 "raid_level": "concat", 00:09:36.731 "superblock": false, 00:09:36.731 "num_base_bdevs": 3, 00:09:36.731 "num_base_bdevs_discovered": 1, 00:09:36.731 "num_base_bdevs_operational": 3, 00:09:36.731 "base_bdevs_list": [ 00:09:36.731 { 00:09:36.731 "name": "BaseBdev1", 00:09:36.731 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:36.731 "is_configured": true, 00:09:36.731 "data_offset": 0, 00:09:36.731 "data_size": 65536 00:09:36.731 }, 00:09:36.731 { 00:09:36.731 "name": "BaseBdev2", 00:09:36.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.731 "is_configured": false, 00:09:36.731 "data_offset": 0, 00:09:36.731 "data_size": 0 00:09:36.731 }, 00:09:36.731 { 00:09:36.731 "name": "BaseBdev3", 00:09:36.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.731 "is_configured": false, 00:09:36.731 "data_offset": 0, 00:09:36.731 "data_size": 0 00:09:36.731 } 00:09:36.731 ] 00:09:36.731 }' 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.731 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.346 [2024-12-13 08:20:49.485305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.346 BaseBdev2 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.346 [ 00:09:37.346 { 00:09:37.346 "name": "BaseBdev2", 00:09:37.346 "aliases": [ 00:09:37.346 "59a02d47-70d1-4488-97bc-0d1192339b5a" 00:09:37.346 ], 
00:09:37.346 "product_name": "Malloc disk", 00:09:37.346 "block_size": 512, 00:09:37.346 "num_blocks": 65536, 00:09:37.346 "uuid": "59a02d47-70d1-4488-97bc-0d1192339b5a", 00:09:37.346 "assigned_rate_limits": { 00:09:37.346 "rw_ios_per_sec": 0, 00:09:37.346 "rw_mbytes_per_sec": 0, 00:09:37.346 "r_mbytes_per_sec": 0, 00:09:37.346 "w_mbytes_per_sec": 0 00:09:37.346 }, 00:09:37.346 "claimed": true, 00:09:37.346 "claim_type": "exclusive_write", 00:09:37.346 "zoned": false, 00:09:37.346 "supported_io_types": { 00:09:37.346 "read": true, 00:09:37.346 "write": true, 00:09:37.346 "unmap": true, 00:09:37.346 "flush": true, 00:09:37.346 "reset": true, 00:09:37.346 "nvme_admin": false, 00:09:37.346 "nvme_io": false, 00:09:37.346 "nvme_io_md": false, 00:09:37.346 "write_zeroes": true, 00:09:37.346 "zcopy": true, 00:09:37.346 "get_zone_info": false, 00:09:37.346 "zone_management": false, 00:09:37.346 "zone_append": false, 00:09:37.346 "compare": false, 00:09:37.346 "compare_and_write": false, 00:09:37.346 "abort": true, 00:09:37.346 "seek_hole": false, 00:09:37.346 "seek_data": false, 00:09:37.346 "copy": true, 00:09:37.346 "nvme_iov_md": false 00:09:37.346 }, 00:09:37.346 "memory_domains": [ 00:09:37.346 { 00:09:37.346 "dma_device_id": "system", 00:09:37.346 "dma_device_type": 1 00:09:37.346 }, 00:09:37.346 { 00:09:37.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.346 "dma_device_type": 2 00:09:37.346 } 00:09:37.346 ], 00:09:37.346 "driver_specific": {} 00:09:37.346 } 00:09:37.346 ] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.346 "name": "Existed_Raid", 00:09:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.346 "strip_size_kb": 64, 00:09:37.346 "state": "configuring", 00:09:37.346 "raid_level": "concat", 00:09:37.346 
"superblock": false, 00:09:37.346 "num_base_bdevs": 3, 00:09:37.346 "num_base_bdevs_discovered": 2, 00:09:37.346 "num_base_bdevs_operational": 3, 00:09:37.346 "base_bdevs_list": [ 00:09:37.346 { 00:09:37.346 "name": "BaseBdev1", 00:09:37.346 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:37.346 "is_configured": true, 00:09:37.346 "data_offset": 0, 00:09:37.346 "data_size": 65536 00:09:37.346 }, 00:09:37.346 { 00:09:37.346 "name": "BaseBdev2", 00:09:37.346 "uuid": "59a02d47-70d1-4488-97bc-0d1192339b5a", 00:09:37.346 "is_configured": true, 00:09:37.346 "data_offset": 0, 00:09:37.346 "data_size": 65536 00:09:37.346 }, 00:09:37.346 { 00:09:37.346 "name": "BaseBdev3", 00:09:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.346 "is_configured": false, 00:09:37.346 "data_offset": 0, 00:09:37.346 "data_size": 0 00:09:37.346 } 00:09:37.346 ] 00:09:37.346 }' 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.346 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.605 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.605 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.605 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.863 [2024-12-13 08:20:49.989969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.863 [2024-12-13 08:20:49.990130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:37.863 [2024-12-13 08:20:49.990165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:37.863 [2024-12-13 08:20:49.990535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:37.863 [2024-12-13 08:20:49.990774] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:37.863 [2024-12-13 08:20:49.990824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:37.863 [2024-12-13 08:20:49.991187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.863 BaseBdev3 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.863 08:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.863 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.863 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.863 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.863 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.863 [ 00:09:37.863 { 00:09:37.863 
"name": "BaseBdev3", 00:09:37.863 "aliases": [ 00:09:37.863 "d180f9ff-2318-44d9-96d2-84939b17ff8b" 00:09:37.863 ], 00:09:37.863 "product_name": "Malloc disk", 00:09:37.863 "block_size": 512, 00:09:37.863 "num_blocks": 65536, 00:09:37.863 "uuid": "d180f9ff-2318-44d9-96d2-84939b17ff8b", 00:09:37.863 "assigned_rate_limits": { 00:09:37.863 "rw_ios_per_sec": 0, 00:09:37.863 "rw_mbytes_per_sec": 0, 00:09:37.863 "r_mbytes_per_sec": 0, 00:09:37.863 "w_mbytes_per_sec": 0 00:09:37.863 }, 00:09:37.863 "claimed": true, 00:09:37.863 "claim_type": "exclusive_write", 00:09:37.863 "zoned": false, 00:09:37.863 "supported_io_types": { 00:09:37.863 "read": true, 00:09:37.863 "write": true, 00:09:37.863 "unmap": true, 00:09:37.863 "flush": true, 00:09:37.863 "reset": true, 00:09:37.863 "nvme_admin": false, 00:09:37.863 "nvme_io": false, 00:09:37.864 "nvme_io_md": false, 00:09:37.864 "write_zeroes": true, 00:09:37.864 "zcopy": true, 00:09:37.864 "get_zone_info": false, 00:09:37.864 "zone_management": false, 00:09:37.864 "zone_append": false, 00:09:37.864 "compare": false, 00:09:37.864 "compare_and_write": false, 00:09:37.864 "abort": true, 00:09:37.864 "seek_hole": false, 00:09:37.864 "seek_data": false, 00:09:37.864 "copy": true, 00:09:37.864 "nvme_iov_md": false 00:09:37.864 }, 00:09:37.864 "memory_domains": [ 00:09:37.864 { 00:09:37.864 "dma_device_id": "system", 00:09:37.864 "dma_device_type": 1 00:09:37.864 }, 00:09:37.864 { 00:09:37.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.864 "dma_device_type": 2 00:09:37.864 } 00:09:37.864 ], 00:09:37.864 "driver_specific": {} 00:09:37.864 } 00:09:37.864 ] 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.864 "name": "Existed_Raid", 00:09:37.864 "uuid": "09e8e72c-4304-480d-a14d-3a3c85e1fd84", 00:09:37.864 
"strip_size_kb": 64, 00:09:37.864 "state": "online", 00:09:37.864 "raid_level": "concat", 00:09:37.864 "superblock": false, 00:09:37.864 "num_base_bdevs": 3, 00:09:37.864 "num_base_bdevs_discovered": 3, 00:09:37.864 "num_base_bdevs_operational": 3, 00:09:37.864 "base_bdevs_list": [ 00:09:37.864 { 00:09:37.864 "name": "BaseBdev1", 00:09:37.864 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:37.864 "is_configured": true, 00:09:37.864 "data_offset": 0, 00:09:37.864 "data_size": 65536 00:09:37.864 }, 00:09:37.864 { 00:09:37.864 "name": "BaseBdev2", 00:09:37.864 "uuid": "59a02d47-70d1-4488-97bc-0d1192339b5a", 00:09:37.864 "is_configured": true, 00:09:37.864 "data_offset": 0, 00:09:37.864 "data_size": 65536 00:09:37.864 }, 00:09:37.864 { 00:09:37.864 "name": "BaseBdev3", 00:09:37.864 "uuid": "d180f9ff-2318-44d9-96d2-84939b17ff8b", 00:09:37.864 "is_configured": true, 00:09:37.864 "data_offset": 0, 00:09:37.864 "data_size": 65536 00:09:37.864 } 00:09:37.864 ] 00:09:37.864 }' 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.864 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.123 [2024-12-13 08:20:50.417617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.123 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.123 "name": "Existed_Raid", 00:09:38.123 "aliases": [ 00:09:38.123 "09e8e72c-4304-480d-a14d-3a3c85e1fd84" 00:09:38.123 ], 00:09:38.123 "product_name": "Raid Volume", 00:09:38.123 "block_size": 512, 00:09:38.123 "num_blocks": 196608, 00:09:38.123 "uuid": "09e8e72c-4304-480d-a14d-3a3c85e1fd84", 00:09:38.123 "assigned_rate_limits": { 00:09:38.123 "rw_ios_per_sec": 0, 00:09:38.123 "rw_mbytes_per_sec": 0, 00:09:38.123 "r_mbytes_per_sec": 0, 00:09:38.123 "w_mbytes_per_sec": 0 00:09:38.123 }, 00:09:38.123 "claimed": false, 00:09:38.123 "zoned": false, 00:09:38.123 "supported_io_types": { 00:09:38.123 "read": true, 00:09:38.123 "write": true, 00:09:38.123 "unmap": true, 00:09:38.123 "flush": true, 00:09:38.123 "reset": true, 00:09:38.123 "nvme_admin": false, 00:09:38.123 "nvme_io": false, 00:09:38.123 "nvme_io_md": false, 00:09:38.123 "write_zeroes": true, 00:09:38.123 "zcopy": false, 00:09:38.123 "get_zone_info": false, 00:09:38.123 "zone_management": false, 00:09:38.123 "zone_append": false, 00:09:38.123 "compare": false, 00:09:38.123 "compare_and_write": false, 00:09:38.123 "abort": false, 00:09:38.123 "seek_hole": false, 00:09:38.123 "seek_data": false, 00:09:38.123 "copy": false, 00:09:38.123 "nvme_iov_md": false 00:09:38.123 }, 00:09:38.123 "memory_domains": [ 00:09:38.123 { 00:09:38.123 "dma_device_id": "system", 
00:09:38.123 "dma_device_type": 1 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.123 "dma_device_type": 2 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "dma_device_id": "system", 00:09:38.123 "dma_device_type": 1 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.123 "dma_device_type": 2 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "dma_device_id": "system", 00:09:38.123 "dma_device_type": 1 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.123 "dma_device_type": 2 00:09:38.123 } 00:09:38.123 ], 00:09:38.123 "driver_specific": { 00:09:38.123 "raid": { 00:09:38.123 "uuid": "09e8e72c-4304-480d-a14d-3a3c85e1fd84", 00:09:38.123 "strip_size_kb": 64, 00:09:38.123 "state": "online", 00:09:38.123 "raid_level": "concat", 00:09:38.123 "superblock": false, 00:09:38.123 "num_base_bdevs": 3, 00:09:38.123 "num_base_bdevs_discovered": 3, 00:09:38.123 "num_base_bdevs_operational": 3, 00:09:38.123 "base_bdevs_list": [ 00:09:38.123 { 00:09:38.123 "name": "BaseBdev1", 00:09:38.123 "uuid": "365689c2-5c2d-47ae-aad6-e33c1eb5a105", 00:09:38.123 "is_configured": true, 00:09:38.123 "data_offset": 0, 00:09:38.123 "data_size": 65536 00:09:38.123 }, 00:09:38.123 { 00:09:38.123 "name": "BaseBdev2", 00:09:38.123 "uuid": "59a02d47-70d1-4488-97bc-0d1192339b5a", 00:09:38.123 "is_configured": true, 00:09:38.123 "data_offset": 0, 00:09:38.123 "data_size": 65536 00:09:38.123 }, 00:09:38.123 { 00:09:38.124 "name": "BaseBdev3", 00:09:38.124 "uuid": "d180f9ff-2318-44d9-96d2-84939b17ff8b", 00:09:38.124 "is_configured": true, 00:09:38.124 "data_offset": 0, 00:09:38.124 "data_size": 65536 00:09:38.124 } 00:09:38.124 ] 00:09:38.124 } 00:09:38.124 } 00:09:38.124 }' 00:09:38.124 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.384 08:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.384 BaseBdev2 00:09:38.384 BaseBdev3' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.384 08:20:50 
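The `bdev_raid.sh@188` step above extracts the member names from the `Existed_Raid` dump. A minimal standalone sketch of that jq filter (the JSON here is a trimmed stand-in for the real `raid_bdev_info` dump, not the full log output):

```shell
# Trimmed stand-in for the Existed_Raid JSON captured at @187 above.
raid_bdev_info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":false}]}}}'
# select(.is_configured == true).name keeps only configured members,
# one name per line -- exactly how base_bdev_names gets populated.
printf '%s' "$raid_bdev_info" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# prints:
# BaseBdev1
# BaseBdev2
```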
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.384 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.384 [2024-12-13 08:20:50.724873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.384 [2024-12-13 08:20:50.724947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.384 [2024-12-13 08:20:50.725025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- 
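The `[[ 512 == \5\1\2\ \ \ ]]` comparisons above look odd but are deliberate: the malloc base bdevs report no `md_size`, `md_interleave`, or `dif_type`, and jq's `join(" ")` renders those nulls as empty strings, so `cmp_base_bdev` is `512` followed by three spaces. A hypothetical sketch of the `@189`/`@192` computation:

```shell
# Stand-in for one field of a bdev_get_bdevs entry; the md_* and dif_type
# keys are absent, so jq resolves them to null.
cmp_base_bdev=$(printf '%s' '{"block_size":512}' |
  jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
# join(" ") turns each null into an empty string: "512" + three spaces.
[[ $cmp_base_bdev == "512   " ]] && echo match
# prints: match
```

This is why the log's `[[ ... ]]` pattern escapes three trailing blanks: the test is asserting that the raid bdev and each base bdev agree on all four fields, including the absent ones.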
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.644 08:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.644 "name": "Existed_Raid", 00:09:38.644 "uuid": "09e8e72c-4304-480d-a14d-3a3c85e1fd84", 00:09:38.644 "strip_size_kb": 64, 00:09:38.644 "state": "offline", 00:09:38.644 "raid_level": "concat", 00:09:38.644 "superblock": false, 00:09:38.644 "num_base_bdevs": 3, 00:09:38.644 "num_base_bdevs_discovered": 2, 00:09:38.644 "num_base_bdevs_operational": 2, 00:09:38.644 "base_bdevs_list": [ 00:09:38.644 { 00:09:38.644 "name": null, 00:09:38.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.644 "is_configured": false, 00:09:38.644 "data_offset": 0, 00:09:38.644 "data_size": 65536 00:09:38.644 }, 00:09:38.644 { 00:09:38.644 "name": "BaseBdev2", 00:09:38.644 "uuid": "59a02d47-70d1-4488-97bc-0d1192339b5a", 00:09:38.644 "is_configured": true, 00:09:38.644 "data_offset": 0, 00:09:38.644 "data_size": 65536 00:09:38.644 }, 00:09:38.644 { 00:09:38.644 "name": "BaseBdev3", 00:09:38.644 "uuid": "d180f9ff-2318-44d9-96d2-84939b17ff8b", 00:09:38.644 "is_configured": true, 00:09:38.644 "data_offset": 0, 00:09:38.644 "data_size": 65536 00:09:38.644 } 00:09:38.644 ] 00:09:38.644 }' 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.644 08:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.903 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:38.903 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.903 08:20:51 bdev_raid.raid_state_function_test -- 
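The `verify_raid_bdev_state Existed_Raid offline concat 64 2` call above boils down to selecting the named entry from `bdev_raid_get_bdevs all` and comparing its fields. A hypothetical reduction of that check (sample JSON, not the full dump):

```shell
# Trimmed stand-in for bdev_raid_get_bdevs output after BaseBdev1 was deleted.
raid_dump='[{"name":"Existed_Raid","state":"offline","num_base_bdevs_operational":2}]'
# Same filter as bdev_raid.sh@113: pick the entry by name.
tmp=$(printf '%s' "$raid_dump" | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(printf '%s' "$tmp" | jq -r .state)
ops=$(printf '%s' "$tmp" | jq -r .num_base_bdevs_operational)
echo "$state $ops"
# prints: offline 2
```

Since `concat` has no redundancy (`has_redundancy` returns 1 above), losing one base bdev drives the expected state to `offline` rather than `degraded`.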
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.904 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.904 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.904 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.163 [2024-12-13 08:20:51.300844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.163 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.163 [2024-12-13 08:20:51.457391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.163 [2024-12-13 08:20:51.457494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.424 08:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.424 BaseBdev2 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs 
-b BaseBdev2 -t 2000 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.424 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.424 [ 00:09:39.424 { 00:09:39.424 "name": "BaseBdev2", 00:09:39.424 "aliases": [ 00:09:39.424 "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2" 00:09:39.424 ], 00:09:39.424 "product_name": "Malloc disk", 00:09:39.424 "block_size": 512, 00:09:39.424 "num_blocks": 65536, 00:09:39.424 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:39.424 "assigned_rate_limits": { 00:09:39.424 "rw_ios_per_sec": 0, 00:09:39.424 "rw_mbytes_per_sec": 0, 00:09:39.424 "r_mbytes_per_sec": 0, 00:09:39.424 "w_mbytes_per_sec": 0 00:09:39.424 }, 00:09:39.424 "claimed": false, 00:09:39.424 "zoned": false, 00:09:39.424 "supported_io_types": { 00:09:39.424 "read": true, 00:09:39.424 "write": true, 00:09:39.424 "unmap": true, 00:09:39.424 "flush": true, 00:09:39.424 "reset": true, 00:09:39.424 "nvme_admin": false, 00:09:39.424 "nvme_io": false, 00:09:39.424 "nvme_io_md": false, 00:09:39.424 "write_zeroes": true, 00:09:39.424 "zcopy": true, 00:09:39.424 "get_zone_info": false, 00:09:39.424 "zone_management": false, 00:09:39.424 "zone_append": false, 00:09:39.424 "compare": false, 00:09:39.424 "compare_and_write": false, 00:09:39.424 "abort": true, 00:09:39.424 "seek_hole": false, 00:09:39.424 "seek_data": false, 00:09:39.424 "copy": true, 00:09:39.424 "nvme_iov_md": false 00:09:39.424 }, 00:09:39.424 "memory_domains": [ 00:09:39.424 { 00:09:39.424 "dma_device_id": "system", 00:09:39.424 "dma_device_type": 1 00:09:39.424 }, 00:09:39.424 { 00:09:39.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.425 "dma_device_type": 2 00:09:39.425 } 00:09:39.425 ], 00:09:39.425 "driver_specific": {} 00:09:39.425 } 00:09:39.425 ] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.425 08:20:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.425 BaseBdev3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.425 [ 00:09:39.425 { 00:09:39.425 "name": "BaseBdev3", 00:09:39.425 "aliases": [ 00:09:39.425 "df3073a0-dc4c-4220-bb4c-287bc6c822bf" 00:09:39.425 ], 00:09:39.425 "product_name": "Malloc disk", 00:09:39.425 "block_size": 512, 00:09:39.425 "num_blocks": 65536, 00:09:39.425 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:39.425 "assigned_rate_limits": { 00:09:39.425 "rw_ios_per_sec": 0, 00:09:39.425 "rw_mbytes_per_sec": 0, 00:09:39.425 "r_mbytes_per_sec": 0, 00:09:39.425 "w_mbytes_per_sec": 0 00:09:39.425 }, 00:09:39.425 "claimed": false, 00:09:39.425 "zoned": false, 00:09:39.425 "supported_io_types": { 00:09:39.425 "read": true, 00:09:39.425 "write": true, 00:09:39.425 "unmap": true, 00:09:39.425 "flush": true, 00:09:39.425 "reset": true, 00:09:39.425 "nvme_admin": false, 00:09:39.425 "nvme_io": false, 00:09:39.425 "nvme_io_md": false, 00:09:39.425 "write_zeroes": true, 00:09:39.425 "zcopy": true, 00:09:39.425 "get_zone_info": false, 00:09:39.425 "zone_management": false, 00:09:39.425 "zone_append": false, 00:09:39.425 "compare": false, 00:09:39.425 "compare_and_write": false, 00:09:39.425 "abort": true, 00:09:39.425 "seek_hole": false, 00:09:39.425 "seek_data": false, 00:09:39.425 "copy": true, 00:09:39.425 "nvme_iov_md": false 00:09:39.425 }, 00:09:39.425 "memory_domains": [ 00:09:39.425 { 00:09:39.425 "dma_device_id": "system", 00:09:39.425 "dma_device_type": 1 00:09:39.425 }, 00:09:39.425 { 00:09:39.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.425 "dma_device_type": 2 00:09:39.425 } 00:09:39.425 ], 00:09:39.425 "driver_specific": {} 00:09:39.425 } 00:09:39.425 ] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.425 08:20:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.425 [2024-12-13 08:20:51.774925] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.425 [2024-12-13 08:20:51.775041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.425 [2024-12-13 08:20:51.775084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.425 [2024-12-13 08:20:51.777016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.425 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.685 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.685 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.685 "name": "Existed_Raid", 00:09:39.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.685 "strip_size_kb": 64, 00:09:39.685 "state": "configuring", 00:09:39.685 "raid_level": "concat", 00:09:39.685 "superblock": false, 00:09:39.685 "num_base_bdevs": 3, 00:09:39.685 "num_base_bdevs_discovered": 2, 00:09:39.685 "num_base_bdevs_operational": 3, 00:09:39.685 "base_bdevs_list": [ 00:09:39.685 { 00:09:39.685 "name": "BaseBdev1", 00:09:39.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.685 "is_configured": false, 00:09:39.685 "data_offset": 0, 00:09:39.685 "data_size": 0 00:09:39.685 }, 00:09:39.685 { 00:09:39.685 "name": "BaseBdev2", 00:09:39.685 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:39.685 "is_configured": true, 00:09:39.685 "data_offset": 0, 00:09:39.685 "data_size": 65536 00:09:39.685 }, 
00:09:39.685 { 00:09:39.685 "name": "BaseBdev3", 00:09:39.685 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:39.685 "is_configured": true, 00:09:39.685 "data_offset": 0, 00:09:39.685 "data_size": 65536 00:09:39.685 } 00:09:39.685 ] 00:09:39.685 }' 00:09:39.685 08:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.685 08:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 [2024-12-13 08:20:52.234196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.945 08:20:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.945 "name": "Existed_Raid", 00:09:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.945 "strip_size_kb": 64, 00:09:39.945 "state": "configuring", 00:09:39.945 "raid_level": "concat", 00:09:39.945 "superblock": false, 00:09:39.945 "num_base_bdevs": 3, 00:09:39.945 "num_base_bdevs_discovered": 1, 00:09:39.945 "num_base_bdevs_operational": 3, 00:09:39.945 "base_bdevs_list": [ 00:09:39.945 { 00:09:39.945 "name": "BaseBdev1", 00:09:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.945 "is_configured": false, 00:09:39.945 "data_offset": 0, 00:09:39.945 "data_size": 0 00:09:39.945 }, 00:09:39.945 { 00:09:39.945 "name": null, 00:09:39.945 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:39.945 "is_configured": false, 00:09:39.945 "data_offset": 0, 00:09:39.945 "data_size": 65536 00:09:39.945 }, 00:09:39.945 { 00:09:39.945 "name": "BaseBdev3", 00:09:39.945 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:39.945 "is_configured": true, 00:09:39.945 "data_offset": 0, 00:09:39.945 "data_size": 65536 00:09:39.945 } 00:09:39.945 ] 00:09:39.945 }' 00:09:39.945 08:20:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.945 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.514 [2024-12-13 08:20:52.757720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.514 BaseBdev1 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- 
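The `@295` check above confirms that after `bdev_raid_remove_base_bdev BaseBdev2`, slot 1 of `base_bdevs_list` is kept as a placeholder (null name, all-zero uuid) with `is_configured` false rather than being dropped from the array. A hypothetical sketch with a trimmed sample:

```shell
# Stand-in mirroring the base_bdevs_list shown at @113 above: BaseBdev1
# deleted, BaseBdev2 removed (placeholder slot), BaseBdev3 still configured.
raid_dump='[{"name":"Existed_Raid","base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":false},
  {"name":null,"is_configured":false},
  {"name":"BaseBdev3","is_configured":true}]}]'
# Same filter as bdev_raid.sh@295: index into the preserved slot.
printf '%s' "$raid_dump" | jq '.[0].base_bdevs_list[1].is_configured'
# prints: false
```

Keeping the slot in place is what lets the subsequent `bdev_malloc_create ... -b BaseBdev1` re-populate the array by position when the raid bdev is reassembled.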
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.514 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.515 [ 00:09:40.515 { 00:09:40.515 "name": "BaseBdev1", 00:09:40.515 "aliases": [ 00:09:40.515 "63b589da-4023-4854-9d9f-b33eb74cd2fc" 00:09:40.515 ], 00:09:40.515 "product_name": "Malloc disk", 00:09:40.515 "block_size": 512, 00:09:40.515 "num_blocks": 65536, 00:09:40.515 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:40.515 "assigned_rate_limits": { 00:09:40.515 "rw_ios_per_sec": 0, 00:09:40.515 "rw_mbytes_per_sec": 0, 00:09:40.515 "r_mbytes_per_sec": 0, 00:09:40.515 "w_mbytes_per_sec": 0 00:09:40.515 }, 00:09:40.515 "claimed": true, 00:09:40.515 "claim_type": "exclusive_write", 00:09:40.515 "zoned": false, 00:09:40.515 "supported_io_types": { 00:09:40.515 "read": true, 00:09:40.515 "write": true, 00:09:40.515 "unmap": true, 00:09:40.515 "flush": true, 00:09:40.515 "reset": true, 00:09:40.515 "nvme_admin": false, 00:09:40.515 "nvme_io": false, 00:09:40.515 "nvme_io_md": false, 00:09:40.515 "write_zeroes": true, 00:09:40.515 "zcopy": true, 00:09:40.515 "get_zone_info": false, 00:09:40.515 "zone_management": false, 
00:09:40.515 "zone_append": false, 00:09:40.515 "compare": false, 00:09:40.515 "compare_and_write": false, 00:09:40.515 "abort": true, 00:09:40.515 "seek_hole": false, 00:09:40.515 "seek_data": false, 00:09:40.515 "copy": true, 00:09:40.515 "nvme_iov_md": false 00:09:40.515 }, 00:09:40.515 "memory_domains": [ 00:09:40.515 { 00:09:40.515 "dma_device_id": "system", 00:09:40.515 "dma_device_type": 1 00:09:40.515 }, 00:09:40.515 { 00:09:40.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.515 "dma_device_type": 2 00:09:40.515 } 00:09:40.515 ], 00:09:40.515 "driver_specific": {} 00:09:40.515 } 00:09:40.515 ] 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.515 "name": "Existed_Raid", 00:09:40.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.515 "strip_size_kb": 64, 00:09:40.515 "state": "configuring", 00:09:40.515 "raid_level": "concat", 00:09:40.515 "superblock": false, 00:09:40.515 "num_base_bdevs": 3, 00:09:40.515 "num_base_bdevs_discovered": 2, 00:09:40.515 "num_base_bdevs_operational": 3, 00:09:40.515 "base_bdevs_list": [ 00:09:40.515 { 00:09:40.515 "name": "BaseBdev1", 00:09:40.515 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:40.515 "is_configured": true, 00:09:40.515 "data_offset": 0, 00:09:40.515 "data_size": 65536 00:09:40.515 }, 00:09:40.515 { 00:09:40.515 "name": null, 00:09:40.515 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:40.515 "is_configured": false, 00:09:40.515 "data_offset": 0, 00:09:40.515 "data_size": 65536 00:09:40.515 }, 00:09:40.515 { 00:09:40.515 "name": "BaseBdev3", 00:09:40.515 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:40.515 "is_configured": true, 00:09:40.515 "data_offset": 0, 00:09:40.515 "data_size": 65536 00:09:40.515 } 00:09:40.515 ] 00:09:40.515 }' 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.515 08:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.085 08:20:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.085 [2024-12-13 08:20:53.280918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.085 "name": "Existed_Raid", 00:09:41.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.085 "strip_size_kb": 64, 00:09:41.085 "state": "configuring", 00:09:41.085 "raid_level": "concat", 00:09:41.085 "superblock": false, 00:09:41.085 "num_base_bdevs": 3, 00:09:41.085 "num_base_bdevs_discovered": 1, 00:09:41.085 "num_base_bdevs_operational": 3, 00:09:41.085 "base_bdevs_list": [ 00:09:41.085 { 00:09:41.085 "name": "BaseBdev1", 00:09:41.085 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:41.085 "is_configured": true, 00:09:41.085 "data_offset": 0, 00:09:41.085 "data_size": 65536 00:09:41.085 }, 00:09:41.085 { 00:09:41.085 "name": null, 00:09:41.085 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:41.085 "is_configured": false, 00:09:41.085 "data_offset": 0, 00:09:41.085 "data_size": 65536 00:09:41.085 }, 00:09:41.085 { 00:09:41.085 "name": null, 00:09:41.085 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 
00:09:41.085 "is_configured": false, 00:09:41.085 "data_offset": 0, 00:09:41.085 "data_size": 65536 00:09:41.085 } 00:09:41.085 ] 00:09:41.085 }' 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.085 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 [2024-12-13 08:20:53.804037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.653 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.653 "name": "Existed_Raid", 00:09:41.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.653 "strip_size_kb": 64, 00:09:41.653 "state": "configuring", 00:09:41.653 "raid_level": "concat", 00:09:41.653 "superblock": false, 00:09:41.653 "num_base_bdevs": 3, 00:09:41.653 "num_base_bdevs_discovered": 2, 00:09:41.653 "num_base_bdevs_operational": 3, 00:09:41.653 "base_bdevs_list": [ 00:09:41.653 { 00:09:41.653 "name": "BaseBdev1", 00:09:41.653 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:41.653 
"is_configured": true, 00:09:41.653 "data_offset": 0, 00:09:41.653 "data_size": 65536 00:09:41.653 }, 00:09:41.653 { 00:09:41.653 "name": null, 00:09:41.653 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:41.653 "is_configured": false, 00:09:41.654 "data_offset": 0, 00:09:41.654 "data_size": 65536 00:09:41.654 }, 00:09:41.654 { 00:09:41.654 "name": "BaseBdev3", 00:09:41.654 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:41.654 "is_configured": true, 00:09:41.654 "data_offset": 0, 00:09:41.654 "data_size": 65536 00:09:41.654 } 00:09:41.654 ] 00:09:41.654 }' 00:09:41.654 08:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.654 08:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.914 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.914 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.914 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.914 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.914 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.172 [2024-12-13 08:20:54.303231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.172 "name": "Existed_Raid", 00:09:42.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.172 "strip_size_kb": 64, 00:09:42.172 "state": 
"configuring", 00:09:42.172 "raid_level": "concat", 00:09:42.172 "superblock": false, 00:09:42.172 "num_base_bdevs": 3, 00:09:42.172 "num_base_bdevs_discovered": 1, 00:09:42.172 "num_base_bdevs_operational": 3, 00:09:42.172 "base_bdevs_list": [ 00:09:42.172 { 00:09:42.172 "name": null, 00:09:42.172 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:42.172 "is_configured": false, 00:09:42.172 "data_offset": 0, 00:09:42.172 "data_size": 65536 00:09:42.172 }, 00:09:42.172 { 00:09:42.172 "name": null, 00:09:42.172 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:42.172 "is_configured": false, 00:09:42.172 "data_offset": 0, 00:09:42.172 "data_size": 65536 00:09:42.172 }, 00:09:42.172 { 00:09:42.172 "name": "BaseBdev3", 00:09:42.172 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:42.172 "is_configured": true, 00:09:42.172 "data_offset": 0, 00:09:42.172 "data_size": 65536 00:09:42.172 } 00:09:42.172 ] 00:09:42.172 }' 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.172 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:42.741 08:20:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.741 [2024-12-13 08:20:54.853080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.741 "name": "Existed_Raid", 00:09:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.741 "strip_size_kb": 64, 00:09:42.741 "state": "configuring", 00:09:42.741 "raid_level": "concat", 00:09:42.741 "superblock": false, 00:09:42.741 "num_base_bdevs": 3, 00:09:42.741 "num_base_bdevs_discovered": 2, 00:09:42.741 "num_base_bdevs_operational": 3, 00:09:42.741 "base_bdevs_list": [ 00:09:42.741 { 00:09:42.741 "name": null, 00:09:42.741 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:42.741 "is_configured": false, 00:09:42.741 "data_offset": 0, 00:09:42.741 "data_size": 65536 00:09:42.741 }, 00:09:42.741 { 00:09:42.741 "name": "BaseBdev2", 00:09:42.741 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:42.741 "is_configured": true, 00:09:42.741 "data_offset": 0, 00:09:42.741 "data_size": 65536 00:09:42.741 }, 00:09:42.741 { 00:09:42.741 "name": "BaseBdev3", 00:09:42.741 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:42.741 "is_configured": true, 00:09:42.741 "data_offset": 0, 00:09:42.741 "data_size": 65536 00:09:42.741 } 00:09:42.741 ] 00:09:42.741 }' 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.741 08:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.001 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 63b589da-4023-4854-9d9f-b33eb74cd2fc 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 [2024-12-13 08:20:55.441367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:43.269 [2024-12-13 08:20:55.441417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:43.269 [2024-12-13 08:20:55.441426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:43.269 [2024-12-13 08:20:55.441666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.269 [2024-12-13 08:20:55.441805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.269 [2024-12-13 08:20:55.441814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:09:43.269 [2024-12-13 08:20:55.442108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.269 NewBaseBdev 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 [ 00:09:43.269 { 00:09:43.269 "name": "NewBaseBdev", 00:09:43.269 "aliases": [ 00:09:43.269 "63b589da-4023-4854-9d9f-b33eb74cd2fc" 00:09:43.269 ], 00:09:43.269 "product_name": "Malloc disk", 00:09:43.269 "block_size": 512, 00:09:43.269 "num_blocks": 65536, 
00:09:43.269 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:43.269 "assigned_rate_limits": { 00:09:43.269 "rw_ios_per_sec": 0, 00:09:43.269 "rw_mbytes_per_sec": 0, 00:09:43.269 "r_mbytes_per_sec": 0, 00:09:43.269 "w_mbytes_per_sec": 0 00:09:43.269 }, 00:09:43.269 "claimed": true, 00:09:43.269 "claim_type": "exclusive_write", 00:09:43.269 "zoned": false, 00:09:43.269 "supported_io_types": { 00:09:43.269 "read": true, 00:09:43.269 "write": true, 00:09:43.269 "unmap": true, 00:09:43.269 "flush": true, 00:09:43.269 "reset": true, 00:09:43.269 "nvme_admin": false, 00:09:43.269 "nvme_io": false, 00:09:43.269 "nvme_io_md": false, 00:09:43.269 "write_zeroes": true, 00:09:43.269 "zcopy": true, 00:09:43.269 "get_zone_info": false, 00:09:43.269 "zone_management": false, 00:09:43.269 "zone_append": false, 00:09:43.269 "compare": false, 00:09:43.269 "compare_and_write": false, 00:09:43.269 "abort": true, 00:09:43.269 "seek_hole": false, 00:09:43.269 "seek_data": false, 00:09:43.269 "copy": true, 00:09:43.269 "nvme_iov_md": false 00:09:43.269 }, 00:09:43.269 "memory_domains": [ 00:09:43.269 { 00:09:43.269 "dma_device_id": "system", 00:09:43.269 "dma_device_type": 1 00:09:43.269 }, 00:09:43.269 { 00:09:43.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.269 "dma_device_type": 2 00:09:43.269 } 00:09:43.269 ], 00:09:43.269 "driver_specific": {} 00:09:43.269 } 00:09:43.269 ] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.269 
08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.269 "name": "Existed_Raid", 00:09:43.269 "uuid": "57c259d2-ddec-4562-9b78-4503f9ea7fb6", 00:09:43.269 "strip_size_kb": 64, 00:09:43.269 "state": "online", 00:09:43.269 "raid_level": "concat", 00:09:43.269 "superblock": false, 00:09:43.269 "num_base_bdevs": 3, 00:09:43.269 "num_base_bdevs_discovered": 3, 00:09:43.269 "num_base_bdevs_operational": 3, 00:09:43.269 "base_bdevs_list": [ 00:09:43.269 { 00:09:43.269 "name": "NewBaseBdev", 00:09:43.269 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:43.269 "is_configured": true, 00:09:43.270 
"data_offset": 0, 00:09:43.270 "data_size": 65536 00:09:43.270 }, 00:09:43.270 { 00:09:43.270 "name": "BaseBdev2", 00:09:43.270 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:43.270 "is_configured": true, 00:09:43.270 "data_offset": 0, 00:09:43.270 "data_size": 65536 00:09:43.270 }, 00:09:43.270 { 00:09:43.270 "name": "BaseBdev3", 00:09:43.270 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:43.270 "is_configured": true, 00:09:43.270 "data_offset": 0, 00:09:43.270 "data_size": 65536 00:09:43.270 } 00:09:43.270 ] 00:09:43.270 }' 00:09:43.270 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.270 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.840 [2024-12-13 08:20:55.984841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:43.840 08:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.840 "name": "Existed_Raid", 00:09:43.840 "aliases": [ 00:09:43.840 "57c259d2-ddec-4562-9b78-4503f9ea7fb6" 00:09:43.840 ], 00:09:43.840 "product_name": "Raid Volume", 00:09:43.840 "block_size": 512, 00:09:43.840 "num_blocks": 196608, 00:09:43.840 "uuid": "57c259d2-ddec-4562-9b78-4503f9ea7fb6", 00:09:43.840 "assigned_rate_limits": { 00:09:43.840 "rw_ios_per_sec": 0, 00:09:43.840 "rw_mbytes_per_sec": 0, 00:09:43.840 "r_mbytes_per_sec": 0, 00:09:43.840 "w_mbytes_per_sec": 0 00:09:43.840 }, 00:09:43.840 "claimed": false, 00:09:43.840 "zoned": false, 00:09:43.840 "supported_io_types": { 00:09:43.840 "read": true, 00:09:43.840 "write": true, 00:09:43.840 "unmap": true, 00:09:43.840 "flush": true, 00:09:43.840 "reset": true, 00:09:43.840 "nvme_admin": false, 00:09:43.840 "nvme_io": false, 00:09:43.840 "nvme_io_md": false, 00:09:43.840 "write_zeroes": true, 00:09:43.840 "zcopy": false, 00:09:43.840 "get_zone_info": false, 00:09:43.840 "zone_management": false, 00:09:43.840 "zone_append": false, 00:09:43.840 "compare": false, 00:09:43.840 "compare_and_write": false, 00:09:43.840 "abort": false, 00:09:43.840 "seek_hole": false, 00:09:43.840 "seek_data": false, 00:09:43.840 "copy": false, 00:09:43.840 "nvme_iov_md": false 00:09:43.840 }, 00:09:43.840 "memory_domains": [ 00:09:43.840 { 00:09:43.840 "dma_device_id": "system", 00:09:43.840 "dma_device_type": 1 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.840 "dma_device_type": 2 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "dma_device_id": "system", 00:09:43.840 "dma_device_type": 1 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.840 "dma_device_type": 2 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "dma_device_id": 
"system", 00:09:43.840 "dma_device_type": 1 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.840 "dma_device_type": 2 00:09:43.840 } 00:09:43.840 ], 00:09:43.840 "driver_specific": { 00:09:43.840 "raid": { 00:09:43.840 "uuid": "57c259d2-ddec-4562-9b78-4503f9ea7fb6", 00:09:43.840 "strip_size_kb": 64, 00:09:43.840 "state": "online", 00:09:43.840 "raid_level": "concat", 00:09:43.840 "superblock": false, 00:09:43.840 "num_base_bdevs": 3, 00:09:43.840 "num_base_bdevs_discovered": 3, 00:09:43.840 "num_base_bdevs_operational": 3, 00:09:43.840 "base_bdevs_list": [ 00:09:43.840 { 00:09:43.840 "name": "NewBaseBdev", 00:09:43.840 "uuid": "63b589da-4023-4854-9d9f-b33eb74cd2fc", 00:09:43.840 "is_configured": true, 00:09:43.840 "data_offset": 0, 00:09:43.840 "data_size": 65536 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "name": "BaseBdev2", 00:09:43.840 "uuid": "0386c70f-5b0e-4ccf-91a8-a9bdd9a84ba2", 00:09:43.840 "is_configured": true, 00:09:43.840 "data_offset": 0, 00:09:43.840 "data_size": 65536 00:09:43.840 }, 00:09:43.840 { 00:09:43.840 "name": "BaseBdev3", 00:09:43.840 "uuid": "df3073a0-dc4c-4220-bb4c-287bc6c822bf", 00:09:43.840 "is_configured": true, 00:09:43.840 "data_offset": 0, 00:09:43.840 "data_size": 65536 00:09:43.840 } 00:09:43.840 ] 00:09:43.840 } 00:09:43.840 } 00:09:43.840 }' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:43.840 BaseBdev2 00:09:43.840 BaseBdev3' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.840 08:20:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.840 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.100 
08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.100 [2024-12-13 08:20:56.260122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.100 [2024-12-13 08:20:56.260217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.100 [2024-12-13 08:20:56.260339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.100 [2024-12-13 08:20:56.260431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.100 [2024-12-13 08:20:56.260484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 65765 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65765 ']' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65765 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65765 00:09:44.100 killing process with pid 65765 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65765' 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65765 00:09:44.100 [2024-12-13 08:20:56.293435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.100 08:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65765 00:09:44.358 [2024-12-13 08:20:56.606003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:45.736 00:09:45.736 real 0m10.687s 00:09:45.736 user 0m16.910s 00:09:45.736 sys 0m1.913s 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.736 ************************************ 00:09:45.736 END TEST raid_state_function_test 00:09:45.736 ************************************ 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:45.736 08:20:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:45.736 08:20:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:45.736 08:20:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.736 08:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.736 ************************************ 00:09:45.736 START TEST raid_state_function_test_sb 00:09:45.736 ************************************ 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.736 
08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66386 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.736 Process raid pid: 66386 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 66386' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66386 00:09:45.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66386 ']' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.736 08:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.736 [2024-12-13 08:20:57.915238] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:45.736 [2024-12-13 08:20:57.915445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.736 [2024-12-13 08:20:58.088985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.995 [2024-12-13 08:20:58.209552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.254 [2024-12-13 08:20:58.411332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.254 [2024-12-13 08:20:58.411471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.514 [2024-12-13 08:20:58.772230] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.514 [2024-12-13 08:20:58.772361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.514 [2024-12-13 08:20:58.772392] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.514 [2024-12-13 08:20:58.772418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.514 [2024-12-13 08:20:58.772437] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:46.514 [2024-12-13 08:20:58.772461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.514 "name": "Existed_Raid", 00:09:46.514 "uuid": "cf6611ac-54a2-410d-9af2-905e38da032f", 00:09:46.514 "strip_size_kb": 64, 00:09:46.514 "state": "configuring", 00:09:46.514 "raid_level": "concat", 00:09:46.514 "superblock": true, 00:09:46.514 "num_base_bdevs": 3, 00:09:46.514 "num_base_bdevs_discovered": 0, 00:09:46.514 "num_base_bdevs_operational": 3, 00:09:46.514 "base_bdevs_list": [ 00:09:46.514 { 00:09:46.514 "name": "BaseBdev1", 00:09:46.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.514 "is_configured": false, 00:09:46.514 "data_offset": 0, 00:09:46.514 "data_size": 0 00:09:46.514 }, 00:09:46.514 { 00:09:46.514 "name": "BaseBdev2", 00:09:46.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.514 "is_configured": false, 00:09:46.514 "data_offset": 0, 00:09:46.514 "data_size": 0 00:09:46.514 }, 00:09:46.514 { 00:09:46.514 "name": "BaseBdev3", 00:09:46.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.514 "is_configured": false, 00:09:46.514 "data_offset": 0, 00:09:46.514 "data_size": 0 00:09:46.514 } 00:09:46.514 ] 00:09:46.514 }' 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.514 08:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.083 [2024-12-13 08:20:59.215376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.083 [2024-12-13 08:20:59.215472] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.083 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.083 [2024-12-13 08:20:59.227375] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.084 [2024-12-13 08:20:59.227465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.084 [2024-12-13 08:20:59.227494] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.084 [2024-12-13 08:20:59.227518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.084 [2024-12-13 08:20:59.227536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.084 [2024-12-13 08:20:59.227558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.084 [2024-12-13 08:20:59.274578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.084 BaseBdev1 
00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.084 [ 00:09:47.084 { 00:09:47.084 "name": "BaseBdev1", 00:09:47.084 "aliases": [ 00:09:47.084 "4df42188-8583-4e74-b85c-20585a936312" 00:09:47.084 ], 00:09:47.084 "product_name": "Malloc disk", 00:09:47.084 "block_size": 512, 00:09:47.084 "num_blocks": 65536, 00:09:47.084 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:47.084 "assigned_rate_limits": { 00:09:47.084 
"rw_ios_per_sec": 0, 00:09:47.084 "rw_mbytes_per_sec": 0, 00:09:47.084 "r_mbytes_per_sec": 0, 00:09:47.084 "w_mbytes_per_sec": 0 00:09:47.084 }, 00:09:47.084 "claimed": true, 00:09:47.084 "claim_type": "exclusive_write", 00:09:47.084 "zoned": false, 00:09:47.084 "supported_io_types": { 00:09:47.084 "read": true, 00:09:47.084 "write": true, 00:09:47.084 "unmap": true, 00:09:47.084 "flush": true, 00:09:47.084 "reset": true, 00:09:47.084 "nvme_admin": false, 00:09:47.084 "nvme_io": false, 00:09:47.084 "nvme_io_md": false, 00:09:47.084 "write_zeroes": true, 00:09:47.084 "zcopy": true, 00:09:47.084 "get_zone_info": false, 00:09:47.084 "zone_management": false, 00:09:47.084 "zone_append": false, 00:09:47.084 "compare": false, 00:09:47.084 "compare_and_write": false, 00:09:47.084 "abort": true, 00:09:47.084 "seek_hole": false, 00:09:47.084 "seek_data": false, 00:09:47.084 "copy": true, 00:09:47.084 "nvme_iov_md": false 00:09:47.084 }, 00:09:47.084 "memory_domains": [ 00:09:47.084 { 00:09:47.084 "dma_device_id": "system", 00:09:47.084 "dma_device_type": 1 00:09:47.084 }, 00:09:47.084 { 00:09:47.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.084 "dma_device_type": 2 00:09:47.084 } 00:09:47.084 ], 00:09:47.084 "driver_specific": {} 00:09:47.084 } 00:09:47.084 ] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.084 "name": "Existed_Raid", 00:09:47.084 "uuid": "cd7d4aee-df0b-4431-97f2-0e8ff59f57d4", 00:09:47.084 "strip_size_kb": 64, 00:09:47.084 "state": "configuring", 00:09:47.084 "raid_level": "concat", 00:09:47.084 "superblock": true, 00:09:47.084 "num_base_bdevs": 3, 00:09:47.084 "num_base_bdevs_discovered": 1, 00:09:47.084 "num_base_bdevs_operational": 3, 00:09:47.084 "base_bdevs_list": [ 00:09:47.084 { 00:09:47.084 "name": "BaseBdev1", 00:09:47.084 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:47.084 "is_configured": true, 00:09:47.084 "data_offset": 2048, 00:09:47.084 "data_size": 
63488 00:09:47.084 }, 00:09:47.084 { 00:09:47.084 "name": "BaseBdev2", 00:09:47.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.084 "is_configured": false, 00:09:47.084 "data_offset": 0, 00:09:47.084 "data_size": 0 00:09:47.084 }, 00:09:47.084 { 00:09:47.084 "name": "BaseBdev3", 00:09:47.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.084 "is_configured": false, 00:09:47.084 "data_offset": 0, 00:09:47.084 "data_size": 0 00:09:47.084 } 00:09:47.084 ] 00:09:47.084 }' 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.084 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 [2024-12-13 08:20:59.757813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.653 [2024-12-13 08:20:59.757937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 [2024-12-13 08:20:59.769852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.653 [2024-12-13 
08:20:59.771807] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.653 [2024-12-13 08:20:59.771891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.653 [2024-12-13 08:20:59.771921] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.653 [2024-12-13 08:20:59.771944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.653 "name": "Existed_Raid", 00:09:47.653 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:47.653 "strip_size_kb": 64, 00:09:47.653 "state": "configuring", 00:09:47.653 "raid_level": "concat", 00:09:47.653 "superblock": true, 00:09:47.653 "num_base_bdevs": 3, 00:09:47.653 "num_base_bdevs_discovered": 1, 00:09:47.653 "num_base_bdevs_operational": 3, 00:09:47.653 "base_bdevs_list": [ 00:09:47.653 { 00:09:47.653 "name": "BaseBdev1", 00:09:47.653 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:47.653 "is_configured": true, 00:09:47.653 "data_offset": 2048, 00:09:47.653 "data_size": 63488 00:09:47.653 }, 00:09:47.653 { 00:09:47.653 "name": "BaseBdev2", 00:09:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.653 "is_configured": false, 00:09:47.653 "data_offset": 0, 00:09:47.653 "data_size": 0 00:09:47.653 }, 00:09:47.653 { 00:09:47.653 "name": "BaseBdev3", 00:09:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.653 "is_configured": false, 00:09:47.653 "data_offset": 0, 00:09:47.653 "data_size": 0 00:09:47.653 } 00:09:47.653 ] 00:09:47.653 }' 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.653 08:20:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.914 [2024-12-13 08:21:00.212032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.914 BaseBdev2 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.914 [ 00:09:47.914 { 00:09:47.914 "name": "BaseBdev2", 00:09:47.914 "aliases": [ 00:09:47.914 "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa" 00:09:47.914 ], 00:09:47.914 "product_name": "Malloc disk", 00:09:47.914 "block_size": 512, 00:09:47.914 "num_blocks": 65536, 00:09:47.914 "uuid": "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa", 00:09:47.914 "assigned_rate_limits": { 00:09:47.914 "rw_ios_per_sec": 0, 00:09:47.914 "rw_mbytes_per_sec": 0, 00:09:47.914 "r_mbytes_per_sec": 0, 00:09:47.914 "w_mbytes_per_sec": 0 00:09:47.914 }, 00:09:47.914 "claimed": true, 00:09:47.914 "claim_type": "exclusive_write", 00:09:47.914 "zoned": false, 00:09:47.914 "supported_io_types": { 00:09:47.914 "read": true, 00:09:47.914 "write": true, 00:09:47.914 "unmap": true, 00:09:47.914 "flush": true, 00:09:47.914 "reset": true, 00:09:47.914 "nvme_admin": false, 00:09:47.914 "nvme_io": false, 00:09:47.914 "nvme_io_md": false, 00:09:47.914 "write_zeroes": true, 00:09:47.914 "zcopy": true, 00:09:47.914 "get_zone_info": false, 00:09:47.914 "zone_management": false, 00:09:47.914 "zone_append": false, 00:09:47.914 "compare": false, 00:09:47.914 "compare_and_write": false, 00:09:47.914 "abort": true, 00:09:47.914 "seek_hole": false, 00:09:47.914 "seek_data": false, 00:09:47.914 "copy": true, 00:09:47.914 "nvme_iov_md": false 00:09:47.914 }, 00:09:47.914 "memory_domains": [ 00:09:47.914 { 00:09:47.914 "dma_device_id": "system", 00:09:47.914 "dma_device_type": 1 00:09:47.914 }, 00:09:47.914 { 00:09:47.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.914 "dma_device_type": 2 00:09:47.914 } 00:09:47.914 ], 00:09:47.914 "driver_specific": {} 00:09:47.914 } 00:09:47.914 ] 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.914 08:21:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.173 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.173 "name": "Existed_Raid", 00:09:48.173 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:48.173 "strip_size_kb": 64, 00:09:48.173 "state": "configuring", 00:09:48.173 "raid_level": "concat", 00:09:48.173 "superblock": true, 00:09:48.173 "num_base_bdevs": 3, 00:09:48.173 "num_base_bdevs_discovered": 2, 00:09:48.173 "num_base_bdevs_operational": 3, 00:09:48.173 "base_bdevs_list": [ 00:09:48.173 { 00:09:48.173 "name": "BaseBdev1", 00:09:48.173 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:48.173 "is_configured": true, 00:09:48.173 "data_offset": 2048, 00:09:48.173 "data_size": 63488 00:09:48.173 }, 00:09:48.173 { 00:09:48.173 "name": "BaseBdev2", 00:09:48.173 "uuid": "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa", 00:09:48.173 "is_configured": true, 00:09:48.173 "data_offset": 2048, 00:09:48.173 "data_size": 63488 00:09:48.173 }, 00:09:48.173 { 00:09:48.173 "name": "BaseBdev3", 00:09:48.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.173 "is_configured": false, 00:09:48.173 "data_offset": 0, 00:09:48.173 "data_size": 0 00:09:48.173 } 00:09:48.173 ] 00:09:48.173 }' 00:09:48.173 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.173 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.433 [2024-12-13 08:21:00.726805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.433 [2024-12-13 08:21:00.727234] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.433 [2024-12-13 08:21:00.727305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.433 [2024-12-13 08:21:00.727635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:48.433 BaseBdev3 00:09:48.433 [2024-12-13 08:21:00.727873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.433 [2024-12-13 08:21:00.727888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:48.433 [2024-12-13 08:21:00.728060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.433 [ 00:09:48.433 { 00:09:48.433 "name": "BaseBdev3", 00:09:48.433 "aliases": [ 00:09:48.433 "609a8c24-c20c-4171-bfb0-6fe408f3fbc9" 00:09:48.433 ], 00:09:48.433 "product_name": "Malloc disk", 00:09:48.433 "block_size": 512, 00:09:48.433 "num_blocks": 65536, 00:09:48.433 "uuid": "609a8c24-c20c-4171-bfb0-6fe408f3fbc9", 00:09:48.433 "assigned_rate_limits": { 00:09:48.433 "rw_ios_per_sec": 0, 00:09:48.433 "rw_mbytes_per_sec": 0, 00:09:48.433 "r_mbytes_per_sec": 0, 00:09:48.433 "w_mbytes_per_sec": 0 00:09:48.433 }, 00:09:48.433 "claimed": true, 00:09:48.433 "claim_type": "exclusive_write", 00:09:48.433 "zoned": false, 00:09:48.433 "supported_io_types": { 00:09:48.433 "read": true, 00:09:48.433 "write": true, 00:09:48.433 "unmap": true, 00:09:48.433 "flush": true, 00:09:48.433 "reset": true, 00:09:48.433 "nvme_admin": false, 00:09:48.433 "nvme_io": false, 00:09:48.433 "nvme_io_md": false, 00:09:48.433 "write_zeroes": true, 00:09:48.433 "zcopy": true, 00:09:48.433 "get_zone_info": false, 00:09:48.433 "zone_management": false, 00:09:48.433 "zone_append": false, 00:09:48.433 "compare": false, 00:09:48.433 "compare_and_write": false, 00:09:48.433 "abort": true, 00:09:48.433 "seek_hole": false, 00:09:48.433 "seek_data": false, 00:09:48.433 "copy": true, 00:09:48.433 "nvme_iov_md": false 00:09:48.433 }, 00:09:48.433 "memory_domains": [ 00:09:48.433 { 00:09:48.433 "dma_device_id": "system", 00:09:48.433 "dma_device_type": 1 00:09:48.433 }, 00:09:48.433 { 00:09:48.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.433 "dma_device_type": 2 00:09:48.433 } 00:09:48.433 ], 00:09:48.433 "driver_specific": 
{} 00:09:48.433 } 00:09:48.433 ] 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.433 
08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.433 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.693 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.693 "name": "Existed_Raid", 00:09:48.693 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:48.693 "strip_size_kb": 64, 00:09:48.693 "state": "online", 00:09:48.693 "raid_level": "concat", 00:09:48.693 "superblock": true, 00:09:48.693 "num_base_bdevs": 3, 00:09:48.693 "num_base_bdevs_discovered": 3, 00:09:48.693 "num_base_bdevs_operational": 3, 00:09:48.693 "base_bdevs_list": [ 00:09:48.693 { 00:09:48.693 "name": "BaseBdev1", 00:09:48.693 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:48.693 "is_configured": true, 00:09:48.693 "data_offset": 2048, 00:09:48.693 "data_size": 63488 00:09:48.693 }, 00:09:48.693 { 00:09:48.693 "name": "BaseBdev2", 00:09:48.693 "uuid": "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa", 00:09:48.693 "is_configured": true, 00:09:48.693 "data_offset": 2048, 00:09:48.693 "data_size": 63488 00:09:48.693 }, 00:09:48.693 { 00:09:48.693 "name": "BaseBdev3", 00:09:48.693 "uuid": "609a8c24-c20c-4171-bfb0-6fe408f3fbc9", 00:09:48.693 "is_configured": true, 00:09:48.693 "data_offset": 2048, 00:09:48.693 "data_size": 63488 00:09:48.693 } 00:09:48.693 ] 00:09:48.693 }' 00:09:48.693 08:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.693 08:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.953 [2024-12-13 08:21:01.202368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.953 "name": "Existed_Raid", 00:09:48.953 "aliases": [ 00:09:48.953 "2d94fa37-9210-4e06-af87-4172a5ebfbc0" 00:09:48.953 ], 00:09:48.953 "product_name": "Raid Volume", 00:09:48.953 "block_size": 512, 00:09:48.953 "num_blocks": 190464, 00:09:48.953 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:48.953 "assigned_rate_limits": { 00:09:48.953 "rw_ios_per_sec": 0, 00:09:48.953 "rw_mbytes_per_sec": 0, 00:09:48.953 "r_mbytes_per_sec": 0, 00:09:48.953 "w_mbytes_per_sec": 0 00:09:48.953 }, 00:09:48.953 "claimed": false, 00:09:48.953 "zoned": false, 00:09:48.953 "supported_io_types": { 00:09:48.953 "read": true, 00:09:48.953 "write": true, 00:09:48.953 "unmap": true, 00:09:48.953 "flush": true, 00:09:48.953 "reset": true, 00:09:48.953 "nvme_admin": false, 00:09:48.953 "nvme_io": false, 00:09:48.953 "nvme_io_md": false, 00:09:48.953 
"write_zeroes": true, 00:09:48.953 "zcopy": false, 00:09:48.953 "get_zone_info": false, 00:09:48.953 "zone_management": false, 00:09:48.953 "zone_append": false, 00:09:48.953 "compare": false, 00:09:48.953 "compare_and_write": false, 00:09:48.953 "abort": false, 00:09:48.953 "seek_hole": false, 00:09:48.953 "seek_data": false, 00:09:48.953 "copy": false, 00:09:48.953 "nvme_iov_md": false 00:09:48.953 }, 00:09:48.953 "memory_domains": [ 00:09:48.953 { 00:09:48.953 "dma_device_id": "system", 00:09:48.953 "dma_device_type": 1 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.953 "dma_device_type": 2 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "dma_device_id": "system", 00:09:48.953 "dma_device_type": 1 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.953 "dma_device_type": 2 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "dma_device_id": "system", 00:09:48.953 "dma_device_type": 1 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.953 "dma_device_type": 2 00:09:48.953 } 00:09:48.953 ], 00:09:48.953 "driver_specific": { 00:09:48.953 "raid": { 00:09:48.953 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:48.953 "strip_size_kb": 64, 00:09:48.953 "state": "online", 00:09:48.953 "raid_level": "concat", 00:09:48.953 "superblock": true, 00:09:48.953 "num_base_bdevs": 3, 00:09:48.953 "num_base_bdevs_discovered": 3, 00:09:48.953 "num_base_bdevs_operational": 3, 00:09:48.953 "base_bdevs_list": [ 00:09:48.953 { 00:09:48.953 "name": "BaseBdev1", 00:09:48.953 "uuid": "4df42188-8583-4e74-b85c-20585a936312", 00:09:48.953 "is_configured": true, 00:09:48.953 "data_offset": 2048, 00:09:48.953 "data_size": 63488 00:09:48.953 }, 00:09:48.953 { 00:09:48.953 "name": "BaseBdev2", 00:09:48.953 "uuid": "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa", 00:09:48.953 "is_configured": true, 00:09:48.953 "data_offset": 2048, 00:09:48.953 "data_size": 63488 00:09:48.953 }, 
00:09:48.953 { 00:09:48.953 "name": "BaseBdev3", 00:09:48.953 "uuid": "609a8c24-c20c-4171-bfb0-6fe408f3fbc9", 00:09:48.953 "is_configured": true, 00:09:48.953 "data_offset": 2048, 00:09:48.953 "data_size": 63488 00:09:48.953 } 00:09:48.953 ] 00:09:48.953 } 00:09:48.953 } 00:09:48.953 }' 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.953 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.953 BaseBdev2 00:09:48.953 BaseBdev3' 00:09:48.954 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.214 
08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.214 [2024-12-13 08:21:01.457655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.214 [2024-12-13 08:21:01.457723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.214 [2024-12-13 08:21:01.457802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.214 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.473 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.473 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.473 "name": "Existed_Raid", 00:09:49.473 "uuid": "2d94fa37-9210-4e06-af87-4172a5ebfbc0", 00:09:49.473 "strip_size_kb": 64, 00:09:49.473 "state": "offline", 00:09:49.473 "raid_level": "concat", 00:09:49.473 "superblock": true, 00:09:49.473 "num_base_bdevs": 3, 00:09:49.473 "num_base_bdevs_discovered": 2, 00:09:49.473 "num_base_bdevs_operational": 2, 00:09:49.473 "base_bdevs_list": [ 00:09:49.473 { 00:09:49.473 "name": null, 00:09:49.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.473 "is_configured": false, 00:09:49.473 "data_offset": 0, 00:09:49.473 "data_size": 63488 00:09:49.473 }, 00:09:49.473 { 00:09:49.473 "name": "BaseBdev2", 00:09:49.473 "uuid": "ee4e4e53-f10b-4941-8a17-9654d9bc5ffa", 00:09:49.473 "is_configured": true, 00:09:49.473 "data_offset": 2048, 00:09:49.473 "data_size": 63488 00:09:49.473 }, 00:09:49.473 { 00:09:49.473 "name": "BaseBdev3", 00:09:49.473 "uuid": "609a8c24-c20c-4171-bfb0-6fe408f3fbc9", 
00:09:49.473 "is_configured": true, 00:09:49.473 "data_offset": 2048, 00:09:49.473 "data_size": 63488 00:09:49.473 } 00:09:49.473 ] 00:09:49.473 }' 00:09:49.473 08:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.473 08:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.732 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.733 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.733 [2024-12-13 08:21:02.084632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.992 [2024-12-13 08:21:02.244698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.992 [2024-12-13 08:21:02.244803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.992 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 BaseBdev2
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 [
00:09:50.253 {
00:09:50.253 "name": "BaseBdev2",
00:09:50.253 "aliases": [
00:09:50.253 "258238a6-e2da-433b-b8ca-cc5bc68f1591"
00:09:50.253 ],
00:09:50.253 "product_name": "Malloc disk",
00:09:50.253 "block_size": 512,
00:09:50.253 "num_blocks": 65536,
00:09:50.253 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:50.253 "assigned_rate_limits": {
00:09:50.253 "rw_ios_per_sec": 0,
00:09:50.253 "rw_mbytes_per_sec": 0,
00:09:50.253 "r_mbytes_per_sec": 0,
00:09:50.253 "w_mbytes_per_sec": 0
00:09:50.253 },
00:09:50.253 "claimed": false,
00:09:50.253 "zoned": false,
00:09:50.253 "supported_io_types": {
00:09:50.253 "read": true,
00:09:50.253 "write": true,
00:09:50.253 "unmap": true,
00:09:50.253 "flush": true,
00:09:50.253 "reset": true,
00:09:50.253 "nvme_admin": false,
00:09:50.253 "nvme_io": false,
00:09:50.253 "nvme_io_md": false,
00:09:50.253 "write_zeroes": true,
00:09:50.253 "zcopy": true,
00:09:50.253 "get_zone_info": false,
00:09:50.253 "zone_management": false,
00:09:50.253 "zone_append": false,
00:09:50.253 "compare": false,
00:09:50.253 "compare_and_write": false,
00:09:50.253 "abort": true,
00:09:50.253 "seek_hole": false,
00:09:50.253 "seek_data": false,
00:09:50.253 "copy": true,
00:09:50.253 "nvme_iov_md": false
00:09:50.253 },
00:09:50.253 "memory_domains": [
00:09:50.253 {
00:09:50.253 "dma_device_id": "system",
00:09:50.253 "dma_device_type": 1
00:09:50.253 },
00:09:50.253 {
00:09:50.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:50.253 "dma_device_type": 2
00:09:50.253 }
00:09:50.253 ],
00:09:50.253 "driver_specific": {}
00:09:50.253 }
00:09:50.253 ]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 BaseBdev3
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 [
00:09:50.253 {
00:09:50.253 "name": "BaseBdev3",
00:09:50.253 "aliases": [
00:09:50.253 "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996"
00:09:50.253 ],
00:09:50.253 "product_name": "Malloc disk",
00:09:50.253 "block_size": 512,
00:09:50.253 "num_blocks": 65536,
00:09:50.253 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:50.253 "assigned_rate_limits": {
00:09:50.253 "rw_ios_per_sec": 0,
00:09:50.253 "rw_mbytes_per_sec": 0,
00:09:50.253 "r_mbytes_per_sec": 0,
00:09:50.253 "w_mbytes_per_sec": 0
00:09:50.253 },
00:09:50.253 "claimed": false,
00:09:50.253 "zoned": false,
00:09:50.253 "supported_io_types": {
00:09:50.253 "read": true,
00:09:50.253 "write": true,
00:09:50.253 "unmap": true,
00:09:50.253 "flush": true,
00:09:50.253 "reset": true,
00:09:50.253 "nvme_admin": false,
00:09:50.253 "nvme_io": false,
00:09:50.253 "nvme_io_md": false,
00:09:50.253 "write_zeroes": true,
00:09:50.253 "zcopy": true,
00:09:50.253 "get_zone_info": false,
00:09:50.253 "zone_management": false,
00:09:50.253 "zone_append": false,
00:09:50.253 "compare": false,
00:09:50.253 "compare_and_write": false,
00:09:50.253 "abort": true,
00:09:50.253 "seek_hole": false,
00:09:50.253 "seek_data": false,
00:09:50.253 "copy": true,
00:09:50.253 "nvme_iov_md": false
00:09:50.253 },
00:09:50.253 "memory_domains": [
00:09:50.253 {
00:09:50.253 "dma_device_id": "system",
00:09:50.253 "dma_device_type": 1
00:09:50.253 },
00:09:50.253 {
00:09:50.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:50.253 "dma_device_type": 2
00:09:50.253 }
00:09:50.253 ],
00:09:50.253 "driver_specific": {}
00:09:50.253 }
00:09:50.253 ]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.253 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.253 [2024-12-13 08:21:02.581652] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:50.253 [2024-12-13 08:21:02.581768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:50.253 [2024-12-13 08:21:02.581820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:50.254 [2024-12-13 08:21:02.583906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.254 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.513 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:50.513 "name": "Existed_Raid",
00:09:50.513 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:50.513 "strip_size_kb": 64,
00:09:50.514 "state": "configuring",
00:09:50.514 "raid_level": "concat",
00:09:50.514 "superblock": true,
00:09:50.514 "num_base_bdevs": 3,
00:09:50.514 "num_base_bdevs_discovered": 2,
00:09:50.514 "num_base_bdevs_operational": 3,
00:09:50.514 "base_bdevs_list": [
00:09:50.514 {
00:09:50.514 "name": "BaseBdev1",
00:09:50.514 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:50.514 "is_configured": false,
00:09:50.514 "data_offset": 0,
00:09:50.514 "data_size": 0
00:09:50.514 },
00:09:50.514 {
00:09:50.514 "name": "BaseBdev2",
00:09:50.514 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:50.514 "is_configured": true,
00:09:50.514 "data_offset": 2048,
00:09:50.514 "data_size": 63488
00:09:50.514 },
00:09:50.514 {
00:09:50.514 "name": "BaseBdev3",
00:09:50.514 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:50.514 "is_configured": true,
00:09:50.514 "data_offset": 2048,
00:09:50.514 "data_size": 63488
00:09:50.514 }
00:09:50.514 ]
00:09:50.514 }'
00:09:50.514 08:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:50.514 08:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.774 [2024-12-13 08:21:03.028881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:50.774 "name": "Existed_Raid",
00:09:50.774 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:50.774 "strip_size_kb": 64,
00:09:50.774 "state": "configuring",
00:09:50.774 "raid_level": "concat",
00:09:50.774 "superblock": true,
00:09:50.774 "num_base_bdevs": 3,
00:09:50.774 "num_base_bdevs_discovered": 1,
00:09:50.774 "num_base_bdevs_operational": 3,
00:09:50.774 "base_bdevs_list": [
00:09:50.774 {
00:09:50.774 "name": "BaseBdev1",
00:09:50.774 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:50.774 "is_configured": false,
00:09:50.774 "data_offset": 0,
00:09:50.774 "data_size": 0
00:09:50.774 },
00:09:50.774 {
00:09:50.774 "name": null,
00:09:50.774 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:50.774 "is_configured": false,
00:09:50.774 "data_offset": 0,
00:09:50.774 "data_size": 63488
00:09:50.774 },
00:09:50.774 {
00:09:50.774 "name": "BaseBdev3",
00:09:50.774 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:50.774 "is_configured": true,
00:09:50.774 "data_offset": 2048,
00:09:50.774 "data_size": 63488
00:09:50.774 }
00:09:50.774 ]
00:09:50.774 }'
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:50.774 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 [2024-12-13 08:21:03.590178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:51.344 BaseBdev1
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 [
00:09:51.344 {
00:09:51.344 "name": "BaseBdev1",
00:09:51.344 "aliases": [
00:09:51.344 "29299562-ea8f-4c3b-b2ed-bded01c85753"
00:09:51.344 ],
00:09:51.344 "product_name": "Malloc disk",
00:09:51.344 "block_size": 512,
00:09:51.344 "num_blocks": 65536,
00:09:51.344 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753",
00:09:51.344 "assigned_rate_limits": {
00:09:51.344 "rw_ios_per_sec": 0,
00:09:51.344 "rw_mbytes_per_sec": 0,
00:09:51.344 "r_mbytes_per_sec": 0,
00:09:51.344 "w_mbytes_per_sec": 0
00:09:51.344 },
00:09:51.344 "claimed": true,
00:09:51.344 "claim_type": "exclusive_write",
00:09:51.344 "zoned": false,
00:09:51.344 "supported_io_types": {
00:09:51.344 "read": true,
00:09:51.344 "write": true,
00:09:51.344 "unmap": true,
00:09:51.344 "flush": true,
00:09:51.344 "reset": true,
00:09:51.344 "nvme_admin": false,
00:09:51.344 "nvme_io": false,
00:09:51.344 "nvme_io_md": false,
00:09:51.344 "write_zeroes": true,
00:09:51.344 "zcopy": true,
00:09:51.344 "get_zone_info": false,
00:09:51.344 "zone_management": false,
00:09:51.344 "zone_append": false,
00:09:51.344 "compare": false,
00:09:51.344 "compare_and_write": false,
00:09:51.344 "abort": true,
00:09:51.344 "seek_hole": false,
00:09:51.344 "seek_data": false,
00:09:51.344 "copy": true,
00:09:51.344 "nvme_iov_md": false
00:09:51.344 },
00:09:51.344 "memory_domains": [
00:09:51.344 {
00:09:51.344 "dma_device_id": "system",
00:09:51.344 "dma_device_type": 1
00:09:51.344 },
00:09:51.344 {
00:09:51.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:51.344 "dma_device_type": 2
00:09:51.344 }
00:09:51.344 ],
00:09:51.344 "driver_specific": {}
00:09:51.344 }
00:09:51.344 ]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.344 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.344 "name": "Existed_Raid",
00:09:51.344 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:51.344 "strip_size_kb": 64,
00:09:51.344 "state": "configuring",
00:09:51.344 "raid_level": "concat",
00:09:51.344 "superblock": true,
00:09:51.344 "num_base_bdevs": 3,
00:09:51.344 "num_base_bdevs_discovered": 2,
00:09:51.344 "num_base_bdevs_operational": 3,
00:09:51.344 "base_bdevs_list": [
00:09:51.344 {
00:09:51.344 "name": "BaseBdev1",
00:09:51.345 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753",
00:09:51.345 "is_configured": true,
00:09:51.345 "data_offset": 2048,
00:09:51.345 "data_size": 63488
00:09:51.345 },
00:09:51.345 {
00:09:51.345 "name": null,
00:09:51.345 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:51.345 "is_configured": false,
00:09:51.345 "data_offset": 0,
00:09:51.345 "data_size": 63488
00:09:51.345 },
00:09:51.345 {
00:09:51.345 "name": "BaseBdev3",
00:09:51.345 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:51.345 "is_configured": true,
00:09:51.345 "data_offset": 2048,
00:09:51.345 "data_size": 63488
00:09:51.345 }
00:09:51.345 ]
00:09:51.345 }'
00:09:51.345 08:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.345 08:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.914 [2024-12-13 08:21:04.105356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.914 "name": "Existed_Raid",
00:09:51.914 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:51.914 "strip_size_kb": 64,
00:09:51.914 "state": "configuring",
00:09:51.914 "raid_level": "concat",
00:09:51.914 "superblock": true,
00:09:51.914 "num_base_bdevs": 3,
00:09:51.914 "num_base_bdevs_discovered": 1,
00:09:51.914 "num_base_bdevs_operational": 3,
00:09:51.914 "base_bdevs_list": [
00:09:51.914 {
00:09:51.914 "name": "BaseBdev1",
00:09:51.914 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753",
00:09:51.914 "is_configured": true,
00:09:51.914 "data_offset": 2048,
00:09:51.914 "data_size": 63488
00:09:51.914 },
00:09:51.914 {
00:09:51.914 "name": null,
00:09:51.914 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:51.914 "is_configured": false,
00:09:51.914 "data_offset": 0,
00:09:51.914 "data_size": 63488
00:09:51.914 },
00:09:51.914 {
00:09:51.914 "name": null,
00:09:51.914 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:51.914 "is_configured": false,
00:09:51.914 "data_offset": 0,
00:09:51.914 "data_size": 63488
00:09:51.914 }
00:09:51.914 ]
00:09:51.914 }'
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.914 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.502 [2024-12-13 08:21:04.656456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.502 "name": "Existed_Raid",
00:09:52.502 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:52.502 "strip_size_kb": 64,
00:09:52.502 "state": "configuring",
00:09:52.502 "raid_level": "concat",
00:09:52.502 "superblock": true,
00:09:52.502 "num_base_bdevs": 3,
00:09:52.502 "num_base_bdevs_discovered": 2,
00:09:52.502 "num_base_bdevs_operational": 3,
00:09:52.502 "base_bdevs_list": [
00:09:52.502 {
00:09:52.502 "name": "BaseBdev1",
00:09:52.502 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753",
00:09:52.502 "is_configured": true,
00:09:52.502 "data_offset": 2048,
00:09:52.502 "data_size": 63488
00:09:52.502 },
00:09:52.502 {
00:09:52.502 "name": null,
00:09:52.502 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:52.502 "is_configured": false,
00:09:52.502 "data_offset": 0,
00:09:52.502 "data_size": 63488
00:09:52.502 },
00:09:52.502 {
00:09:52.502 "name": "BaseBdev3",
00:09:52.502 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996",
00:09:52.502 "is_configured": true,
00:09:52.502 "data_offset": 2048,
00:09:52.502 "data_size": 63488
00:09:52.502 }
00:09:52.502 ]
00:09:52.502 }'
00:09:52.502 08:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.503 08:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.762 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.762 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:52.762 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.762 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.762 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.021 [2024-12-13 08:21:05.151656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:53.021 "name": "Existed_Raid",
00:09:53.021 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6",
00:09:53.021 "strip_size_kb": 64,
00:09:53.021 "state": "configuring",
00:09:53.021 "raid_level": "concat",
00:09:53.021 "superblock": true,
00:09:53.021 "num_base_bdevs": 3,
00:09:53.021 "num_base_bdevs_discovered": 1,
00:09:53.021 "num_base_bdevs_operational": 3,
00:09:53.021 "base_bdevs_list": [
00:09:53.021 {
00:09:53.021 "name": null,
00:09:53.021 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753",
00:09:53.021 "is_configured": false,
00:09:53.021 "data_offset": 0,
00:09:53.021 "data_size": 63488
00:09:53.021 },
00:09:53.021 {
00:09:53.021 "name": null,
00:09:53.021 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591",
00:09:53.021 "is_configured": false,
00:09:53.021 "data_offset": 0, 00:09:53.021 "data_size": 63488 00:09:53.021 }, 00:09:53.021 { 00:09:53.021 "name": "BaseBdev3", 00:09:53.021 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996", 00:09:53.021 "is_configured": true, 00:09:53.021 "data_offset": 2048, 00:09:53.021 "data_size": 63488 00:09:53.021 } 00:09:53.021 ] 00:09:53.021 }' 00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.021 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 [2024-12-13 08:21:05.789428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.590 "name": "Existed_Raid", 00:09:53.590 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6", 00:09:53.590 "strip_size_kb": 64, 00:09:53.590 "state": "configuring", 00:09:53.590 "raid_level": "concat", 00:09:53.590 "superblock": true, 00:09:53.590 
"num_base_bdevs": 3, 00:09:53.590 "num_base_bdevs_discovered": 2, 00:09:53.590 "num_base_bdevs_operational": 3, 00:09:53.590 "base_bdevs_list": [ 00:09:53.590 { 00:09:53.590 "name": null, 00:09:53.590 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753", 00:09:53.590 "is_configured": false, 00:09:53.590 "data_offset": 0, 00:09:53.590 "data_size": 63488 00:09:53.590 }, 00:09:53.590 { 00:09:53.590 "name": "BaseBdev2", 00:09:53.590 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591", 00:09:53.590 "is_configured": true, 00:09:53.590 "data_offset": 2048, 00:09:53.590 "data_size": 63488 00:09:53.590 }, 00:09:53.590 { 00:09:53.590 "name": "BaseBdev3", 00:09:53.590 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996", 00:09:53.590 "is_configured": true, 00:09:53.590 "data_offset": 2048, 00:09:53.590 "data_size": 63488 00:09:53.590 } 00:09:53.590 ] 00:09:53.590 }' 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.590 08:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29299562-ea8f-4c3b-b2ed-bded01c85753 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 [2024-12-13 08:21:06.399043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.161 [2024-12-13 08:21:06.399394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.161 [2024-12-13 08:21:06.399452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.161 [2024-12-13 08:21:06.399769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:54.161 [2024-12-13 08:21:06.399991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.161 NewBaseBdev 00:09:54.161 [2024-12-13 08:21:06.400048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:54.161 [2024-12-13 08:21:06.400267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 [ 00:09:54.161 { 00:09:54.161 "name": "NewBaseBdev", 00:09:54.161 "aliases": [ 00:09:54.161 "29299562-ea8f-4c3b-b2ed-bded01c85753" 00:09:54.161 ], 00:09:54.161 "product_name": "Malloc disk", 00:09:54.161 "block_size": 512, 00:09:54.161 "num_blocks": 65536, 00:09:54.161 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753", 00:09:54.161 "assigned_rate_limits": { 00:09:54.161 "rw_ios_per_sec": 0, 00:09:54.161 "rw_mbytes_per_sec": 0, 00:09:54.161 "r_mbytes_per_sec": 0, 00:09:54.161 "w_mbytes_per_sec": 0 00:09:54.161 }, 00:09:54.161 "claimed": true, 00:09:54.161 "claim_type": "exclusive_write", 00:09:54.161 "zoned": false, 00:09:54.161 "supported_io_types": { 00:09:54.161 "read": true, 00:09:54.161 
"write": true, 00:09:54.161 "unmap": true, 00:09:54.161 "flush": true, 00:09:54.161 "reset": true, 00:09:54.161 "nvme_admin": false, 00:09:54.161 "nvme_io": false, 00:09:54.161 "nvme_io_md": false, 00:09:54.161 "write_zeroes": true, 00:09:54.161 "zcopy": true, 00:09:54.161 "get_zone_info": false, 00:09:54.161 "zone_management": false, 00:09:54.161 "zone_append": false, 00:09:54.161 "compare": false, 00:09:54.161 "compare_and_write": false, 00:09:54.161 "abort": true, 00:09:54.161 "seek_hole": false, 00:09:54.161 "seek_data": false, 00:09:54.161 "copy": true, 00:09:54.161 "nvme_iov_md": false 00:09:54.161 }, 00:09:54.161 "memory_domains": [ 00:09:54.161 { 00:09:54.161 "dma_device_id": "system", 00:09:54.161 "dma_device_type": 1 00:09:54.161 }, 00:09:54.161 { 00:09:54.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.161 "dma_device_type": 2 00:09:54.161 } 00:09:54.161 ], 00:09:54.161 "driver_specific": {} 00:09:54.161 } 00:09:54.161 ] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.161 "name": "Existed_Raid", 00:09:54.161 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6", 00:09:54.161 "strip_size_kb": 64, 00:09:54.161 "state": "online", 00:09:54.161 "raid_level": "concat", 00:09:54.161 "superblock": true, 00:09:54.161 "num_base_bdevs": 3, 00:09:54.161 "num_base_bdevs_discovered": 3, 00:09:54.161 "num_base_bdevs_operational": 3, 00:09:54.161 "base_bdevs_list": [ 00:09:54.161 { 00:09:54.161 "name": "NewBaseBdev", 00:09:54.161 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753", 00:09:54.161 "is_configured": true, 00:09:54.161 "data_offset": 2048, 00:09:54.161 "data_size": 63488 00:09:54.161 }, 00:09:54.161 { 00:09:54.161 "name": "BaseBdev2", 00:09:54.161 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591", 00:09:54.161 "is_configured": true, 00:09:54.161 "data_offset": 2048, 00:09:54.161 "data_size": 63488 00:09:54.161 }, 00:09:54.161 { 00:09:54.161 "name": "BaseBdev3", 00:09:54.161 "uuid": 
"2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996", 00:09:54.161 "is_configured": true, 00:09:54.161 "data_offset": 2048, 00:09:54.161 "data_size": 63488 00:09:54.161 } 00:09:54.161 ] 00:09:54.161 }' 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.161 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.731 [2024-12-13 08:21:06.926523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.731 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.731 "name": "Existed_Raid", 00:09:54.731 "aliases": [ 00:09:54.731 "08a601ab-1d1a-4a2b-a04f-00bfce6965d6" 
00:09:54.731 ], 00:09:54.731 "product_name": "Raid Volume", 00:09:54.731 "block_size": 512, 00:09:54.731 "num_blocks": 190464, 00:09:54.731 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6", 00:09:54.731 "assigned_rate_limits": { 00:09:54.731 "rw_ios_per_sec": 0, 00:09:54.731 "rw_mbytes_per_sec": 0, 00:09:54.731 "r_mbytes_per_sec": 0, 00:09:54.731 "w_mbytes_per_sec": 0 00:09:54.731 }, 00:09:54.731 "claimed": false, 00:09:54.731 "zoned": false, 00:09:54.731 "supported_io_types": { 00:09:54.731 "read": true, 00:09:54.731 "write": true, 00:09:54.731 "unmap": true, 00:09:54.731 "flush": true, 00:09:54.731 "reset": true, 00:09:54.731 "nvme_admin": false, 00:09:54.731 "nvme_io": false, 00:09:54.731 "nvme_io_md": false, 00:09:54.731 "write_zeroes": true, 00:09:54.731 "zcopy": false, 00:09:54.731 "get_zone_info": false, 00:09:54.731 "zone_management": false, 00:09:54.731 "zone_append": false, 00:09:54.731 "compare": false, 00:09:54.731 "compare_and_write": false, 00:09:54.731 "abort": false, 00:09:54.731 "seek_hole": false, 00:09:54.731 "seek_data": false, 00:09:54.731 "copy": false, 00:09:54.731 "nvme_iov_md": false 00:09:54.731 }, 00:09:54.731 "memory_domains": [ 00:09:54.731 { 00:09:54.731 "dma_device_id": "system", 00:09:54.731 "dma_device_type": 1 00:09:54.731 }, 00:09:54.731 { 00:09:54.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.731 "dma_device_type": 2 00:09:54.731 }, 00:09:54.731 { 00:09:54.731 "dma_device_id": "system", 00:09:54.731 "dma_device_type": 1 00:09:54.731 }, 00:09:54.731 { 00:09:54.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.731 "dma_device_type": 2 00:09:54.731 }, 00:09:54.731 { 00:09:54.731 "dma_device_id": "system", 00:09:54.731 "dma_device_type": 1 00:09:54.731 }, 00:09:54.731 { 00:09:54.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.731 "dma_device_type": 2 00:09:54.731 } 00:09:54.731 ], 00:09:54.731 "driver_specific": { 00:09:54.732 "raid": { 00:09:54.732 "uuid": "08a601ab-1d1a-4a2b-a04f-00bfce6965d6", 00:09:54.732 
"strip_size_kb": 64, 00:09:54.732 "state": "online", 00:09:54.732 "raid_level": "concat", 00:09:54.732 "superblock": true, 00:09:54.732 "num_base_bdevs": 3, 00:09:54.732 "num_base_bdevs_discovered": 3, 00:09:54.732 "num_base_bdevs_operational": 3, 00:09:54.732 "base_bdevs_list": [ 00:09:54.732 { 00:09:54.732 "name": "NewBaseBdev", 00:09:54.732 "uuid": "29299562-ea8f-4c3b-b2ed-bded01c85753", 00:09:54.732 "is_configured": true, 00:09:54.732 "data_offset": 2048, 00:09:54.732 "data_size": 63488 00:09:54.732 }, 00:09:54.732 { 00:09:54.732 "name": "BaseBdev2", 00:09:54.732 "uuid": "258238a6-e2da-433b-b8ca-cc5bc68f1591", 00:09:54.732 "is_configured": true, 00:09:54.732 "data_offset": 2048, 00:09:54.732 "data_size": 63488 00:09:54.732 }, 00:09:54.732 { 00:09:54.732 "name": "BaseBdev3", 00:09:54.732 "uuid": "2ca2a8c8-7c37-48d9-bb0a-1cb49fbba996", 00:09:54.732 "is_configured": true, 00:09:54.732 "data_offset": 2048, 00:09:54.732 "data_size": 63488 00:09:54.732 } 00:09:54.732 ] 00:09:54.732 } 00:09:54.732 } 00:09:54.732 }' 00:09:54.732 08:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.732 BaseBdev2 00:09:54.732 BaseBdev3' 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.732 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 [2024-12-13 08:21:07.237690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.992 [2024-12-13 08:21:07.237761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.992 [2024-12-13 08:21:07.237864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.992 [2024-12-13 08:21:07.237937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.992 [2024-12-13 08:21:07.237983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66386 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66386 ']' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66386 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66386 00:09:54.992 killing process with pid 66386 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66386' 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66386 00:09:54.992 [2024-12-13 08:21:07.289561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.992 08:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66386 00:09:55.251 [2024-12-13 08:21:07.606369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.632 08:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:56.632 00:09:56.632 real 0m10.972s 00:09:56.632 user 0m17.401s 00:09:56.632 sys 0m1.957s 00:09:56.632 ************************************ 00:09:56.632 END TEST raid_state_function_test_sb 00:09:56.632 ************************************ 00:09:56.632 08:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.632 08:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.632 08:21:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:56.632 08:21:08 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:56.632 08:21:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.632 08:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.632 ************************************ 00:09:56.632 START TEST raid_superblock_test 00:09:56.632 ************************************ 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:56.632 08:21:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67012 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67012 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67012 ']' 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.632 08:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.632 [2024-12-13 08:21:08.961571] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:56.632 [2024-12-13 08:21:08.961750] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67012 ] 00:09:56.891 [2024-12-13 08:21:09.118011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.891 [2024-12-13 08:21:09.237230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.151 [2024-12-13 08:21:09.447800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.151 [2024-12-13 08:21:09.447946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:57.721 
08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.721 malloc1 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.721 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.722 [2024-12-13 08:21:09.864447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.722 [2024-12-13 08:21:09.864571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.722 [2024-12-13 08:21:09.864617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:57.722 [2024-12-13 08:21:09.864652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.722 [2024-12-13 08:21:09.867267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.722 [2024-12-13 08:21:09.867365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.722 pt1 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:57.722 08:21:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 malloc2
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 [2024-12-13 08:21:09.921745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:57.722 [2024-12-13 08:21:09.921866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:57.722 [2024-12-13 08:21:09.921941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:57.722 [2024-12-13 08:21:09.921981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:57.722 [2024-12-13 08:21:09.924502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:57.722 [2024-12-13 08:21:09.924580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:57.722 pt2
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 malloc3
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 [2024-12-13 08:21:09.986165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:57.722 [2024-12-13 08:21:09.986271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:57.722 [2024-12-13 08:21:09.986325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:57.722 [2024-12-13 08:21:09.986361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:57.722 [2024-12-13 08:21:09.988719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:57.722 [2024-12-13 08:21:09.988796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:57.722 pt3
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 [2024-12-13 08:21:09.998206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:57.722 [2024-12-13 08:21:10.000315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:57.722 [2024-12-13 08:21:10.000438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:57.722 [2024-12-13 08:21:10.000659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:57.722 [2024-12-13 08:21:10.000714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:57.722 [2024-12-13 08:21:10.001019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:57.722 [2024-12-13 08:21:10.001241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:57.722 [2024-12-13 08:21:10.001285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:57.722 [2024-12-13 08:21:10.001492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:57.722 "name": "raid_bdev1",
00:09:57.722 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415",
00:09:57.722 "strip_size_kb": 64,
00:09:57.722 "state": "online",
00:09:57.722 "raid_level": "concat",
00:09:57.722 "superblock": true,
00:09:57.722 "num_base_bdevs": 3,
00:09:57.722 "num_base_bdevs_discovered": 3,
00:09:57.722 "num_base_bdevs_operational": 3,
00:09:57.722 "base_bdevs_list": [
00:09:57.722 {
00:09:57.722 "name": "pt1",
00:09:57.722 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:57.722 "is_configured": true,
00:09:57.722 "data_offset": 2048,
00:09:57.722 "data_size": 63488
00:09:57.722 },
00:09:57.722 {
00:09:57.722 "name": "pt2",
00:09:57.722 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:57.722 "is_configured": true,
00:09:57.722 "data_offset": 2048,
00:09:57.722 "data_size": 63488
00:09:57.722 },
00:09:57.722 {
00:09:57.722 "name": "pt3",
00:09:57.722 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:57.722 "is_configured": true,
00:09:57.722 "data_offset": 2048,
00:09:57.722 "data_size": 63488
00:09:57.722 }
00:09:57.722 ]
00:09:57.722 }'
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:57.722 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local
base_bdev_names
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.296 [2024-12-13 08:21:10.421794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:58.296 "name": "raid_bdev1",
00:09:58.296 "aliases": [
00:09:58.296 "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415"
00:09:58.296 ],
00:09:58.296 "product_name": "Raid Volume",
00:09:58.296 "block_size": 512,
00:09:58.296 "num_blocks": 190464,
00:09:58.296 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415",
00:09:58.296 "assigned_rate_limits": {
00:09:58.296 "rw_ios_per_sec": 0,
00:09:58.296 "rw_mbytes_per_sec": 0,
00:09:58.296 "r_mbytes_per_sec": 0,
00:09:58.296 "w_mbytes_per_sec": 0
00:09:58.296 },
00:09:58.296 "claimed": false,
00:09:58.296 "zoned": false,
00:09:58.296 "supported_io_types": {
00:09:58.296 "read": true,
00:09:58.296 "write": true,
00:09:58.296 "unmap": true,
00:09:58.296 "flush": true,
00:09:58.296 "reset": true,
00:09:58.296 "nvme_admin": false,
00:09:58.296 "nvme_io": false,
00:09:58.296 "nvme_io_md": false,
00:09:58.296 "write_zeroes": true,
00:09:58.296 "zcopy": false,
00:09:58.296 "get_zone_info": false,
00:09:58.296 "zone_management": false,
00:09:58.296 "zone_append": false,
00:09:58.296 "compare": false,
00:09:58.296 "compare_and_write": false,
00:09:58.296 "abort": false,
00:09:58.296 "seek_hole": false,
00:09:58.296 "seek_data": false,
00:09:58.296 "copy": false,
00:09:58.296 "nvme_iov_md": false
00:09:58.296 },
00:09:58.296 "memory_domains": [
00:09:58.296 {
00:09:58.296 "dma_device_id": "system",
00:09:58.296 "dma_device_type": 1
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.296 "dma_device_type": 2
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "dma_device_id": "system",
00:09:58.296 "dma_device_type": 1
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.296 "dma_device_type": 2
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "dma_device_id": "system",
00:09:58.296 "dma_device_type": 1
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.296 "dma_device_type": 2
00:09:58.296 }
00:09:58.296 ],
00:09:58.296 "driver_specific": {
00:09:58.296 "raid": {
00:09:58.296 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415",
00:09:58.296 "strip_size_kb": 64,
00:09:58.296 "state": "online",
00:09:58.296 "raid_level": "concat",
00:09:58.296 "superblock": true,
00:09:58.296 "num_base_bdevs": 3,
00:09:58.296 "num_base_bdevs_discovered": 3,
00:09:58.296 "num_base_bdevs_operational": 3,
00:09:58.296 "base_bdevs_list": [
00:09:58.296 {
00:09:58.296 "name": "pt1",
00:09:58.296 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:58.296 "is_configured": true,
00:09:58.296 "data_offset": 2048,
00:09:58.296 "data_size": 63488
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "name": "pt2",
00:09:58.296 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:58.296 "is_configured": true,
00:09:58.296 "data_offset": 2048,
00:09:58.296 "data_size": 63488
00:09:58.296 },
00:09:58.296 {
00:09:58.296 "name": "pt3",
00:09:58.296 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:58.296 "is_configured": true,
00:09:58.296 "data_offset": 2048,
00:09:58.296 "data_size": 63488
00:09:58.296 }
00:09:58.296 ]
00:09:58.296 }
00:09:58.296 }
00:09:58.296 }'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:58.296 pt2
00:09:58.296 pt3'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:58.296 08:21:10
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.296 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:58.556 [2024-12-13 08:21:10.709281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7cef632-ec30-44bd-bfa8-3fd2d6cd5415
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b7cef632-ec30-44bd-bfa8-3fd2d6cd5415 ']'
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.556 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.556 [2024-12-13 08:21:10.752867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:58.557 [2024-12-13 08:21:10.752942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:58.557 [2024-12-13 08:21:10.753054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:58.557 [2024-12-13 08:21:10.753163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:58.557 [2024-12-13 08:21:10.753215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 [2024-12-13 08:21:10.888676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:58.557 [2024-12-13 08:21:10.890731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*:
bdev malloc2 is claimed
00:09:58.557 [2024-12-13 08:21:10.890842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:58.557 [2024-12-13 08:21:10.890920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:58.557 [2024-12-13 08:21:10.891030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:58.557 [2024-12-13 08:21:10.891088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:58.557 [2024-12-13 08:21:10.891172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:58.557 [2024-12-13 08:21:10.891206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:58.557 request:
00:09:58.557 {
00:09:58.557 "name": "raid_bdev1",
00:09:58.557 "raid_level": "concat",
00:09:58.557 "base_bdevs": [
00:09:58.557 "malloc1",
00:09:58.557 "malloc2",
00:09:58.557 "malloc3"
00:09:58.557 ],
00:09:58.557 "strip_size_kb": 64,
00:09:58.557 "superblock": false,
00:09:58.557 "method": "bdev_raid_create",
00:09:58.557 "req_id": 1
00:09:58.557 }
00:09:58.557 Got JSON-RPC error response
00:09:58.557 response:
00:09:58.557 {
00:09:58.557 "code": -17,
00:09:58.557 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:58.557 }
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.557 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.817 [2024-12-13 08:21:10.956505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:58.817 [2024-12-13 08:21:10.956606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:58.817 [2024-12-13 08:21:10.956630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:58.817 [2024-12-13 08:21:10.956639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:58.817 [2024-12-13 08:21:10.959081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:58.817 [2024-12-13 08:21:10.959127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:58.817 [2024-12-13 08:21:10.959221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:58.817 [2024-12-13 08:21:10.959288]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:58.817 pt1
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.817 08:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.818 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.818 "name": "raid_bdev1",
00:09:58.818 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415",
00:09:58.818 "strip_size_kb": 64,
00:09:58.818 "state": "configuring",
00:09:58.818 "raid_level": "concat",
00:09:58.818 "superblock": true,
00:09:58.818 "num_base_bdevs": 3,
00:09:58.818 "num_base_bdevs_discovered": 1,
00:09:58.818 "num_base_bdevs_operational": 3,
00:09:58.818 "base_bdevs_list": [
00:09:58.818 {
00:09:58.818 "name": "pt1",
00:09:58.818 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:58.818 "is_configured": true,
00:09:58.818 "data_offset": 2048,
00:09:58.818 "data_size": 63488
00:09:58.818 },
00:09:58.818 {
00:09:58.818 "name": null,
00:09:58.818 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:58.818 "is_configured": false,
00:09:58.818 "data_offset": 2048,
00:09:58.818 "data_size": 63488
00:09:58.818 },
00:09:58.818 {
00:09:58.818 "name": null,
00:09:58.818 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:58.818 "is_configured": false,
00:09:58.818 "data_offset": 2048,
00:09:58.818 "data_size": 63488
00:09:58.818 }
00:09:58.818 ]
00:09:58.818 }'
00:09:58.818 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.818 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.078 [2024-12-13 08:21:11.411759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:59.078 [2024-12-13 08:21:11.411882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:59.078 [2024-12-13 08:21:11.411927]
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:59.078 [2024-12-13 08:21:11.411956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:59.078 [2024-12-13 08:21:11.412489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:59.078 [2024-12-13 08:21:11.412550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:59.078 [2024-12-13 08:21:11.412682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:59.078 [2024-12-13 08:21:11.412745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:59.078 pt2
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.078 [2024-12-13 08:21:11.423746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.078 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.338 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.338 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.338 "name": "raid_bdev1",
00:09:59.338 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415",
00:09:59.338 "strip_size_kb": 64,
00:09:59.338 "state": "configuring",
00:09:59.338 "raid_level": "concat",
00:09:59.338 "superblock": true,
00:09:59.338 "num_base_bdevs": 3,
00:09:59.338 "num_base_bdevs_discovered": 1,
00:09:59.338 "num_base_bdevs_operational": 3,
00:09:59.338 "base_bdevs_list": [
00:09:59.338 {
00:09:59.338 "name": "pt1",
00:09:59.338 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:59.338 "is_configured": true,
00:09:59.338 "data_offset": 2048,
00:09:59.338 "data_size": 63488
00:09:59.338 },
00:09:59.338 {
00:09:59.338 "name": null,
00:09:59.338 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:59.338 "is_configured": false,
00:09:59.338 "data_offset": 0,
00:09:59.338 "data_size": 63488
00:09:59.338 },
00:09:59.338 {
00:09:59.338 "name": null,
00:09:59.338 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:59.338 "is_configured": false,
00:09:59.338 "data_offset": 2048,
00:09:59.338 "data_size": 63488
00:09:59.338 }
00:09:59.338 ]
00:09:59.338 }'
00:09:59.338 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.338 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.598 [2024-12-13 08:21:11.918882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:59.598 [2024-12-13 08:21:11.919027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:59.598 [2024-12-13 08:21:11.919068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:59.598 [2024-12-13 08:21:11.919142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:59.598 [2024-12-13 08:21:11.919690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:59.598 [2024-12-13 08:21:11.919757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:59.598 [2024-12-13 08:21:11.919877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:59.598 [2024-12-13 08:21:11.919934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:59.598 pt2
00:09:59.598 08:21:11
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.598 [2024-12-13 08:21:11.930829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.598 [2024-12-13 08:21:11.930934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.598 [2024-12-13 08:21:11.930967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:59.598 [2024-12-13 08:21:11.930996] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.598 [2024-12-13 08:21:11.931457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.598 [2024-12-13 08:21:11.931519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.598 [2024-12-13 08:21:11.931610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.598 [2024-12-13 08:21:11.931657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.598 [2024-12-13 08:21:11.931800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.598 [2024-12-13 08:21:11.931840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.598 [2024-12-13 08:21:11.932094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:59.598 [2024-12-13 08:21:11.932279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.598 [2024-12-13 08:21:11.932317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:59.598 [2024-12-13 08:21:11.932490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.598 pt3 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.598 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.599 08:21:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.599 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.859 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.859 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.859 "name": "raid_bdev1", 00:09:59.859 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415", 00:09:59.859 "strip_size_kb": 64, 00:09:59.859 "state": "online", 00:09:59.859 "raid_level": "concat", 00:09:59.859 "superblock": true, 00:09:59.859 "num_base_bdevs": 3, 00:09:59.859 "num_base_bdevs_discovered": 3, 00:09:59.859 "num_base_bdevs_operational": 3, 00:09:59.859 "base_bdevs_list": [ 00:09:59.859 { 00:09:59.859 "name": "pt1", 00:09:59.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.859 "is_configured": true, 00:09:59.859 "data_offset": 2048, 00:09:59.859 "data_size": 63488 00:09:59.859 }, 00:09:59.859 { 00:09:59.859 "name": "pt2", 00:09:59.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.859 "is_configured": true, 00:09:59.859 "data_offset": 2048, 00:09:59.859 "data_size": 63488 00:09:59.859 }, 00:09:59.859 { 00:09:59.859 "name": "pt3", 00:09:59.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.859 "is_configured": true, 00:09:59.859 "data_offset": 2048, 00:09:59.859 "data_size": 63488 00:09:59.859 } 00:09:59.859 ] 00:09:59.859 }' 00:09:59.859 08:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.859 08:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.119 [2024-12-13 08:21:12.358481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.119 "name": "raid_bdev1", 00:10:00.119 "aliases": [ 00:10:00.119 "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415" 00:10:00.119 ], 00:10:00.119 "product_name": "Raid Volume", 00:10:00.119 "block_size": 512, 00:10:00.119 "num_blocks": 190464, 00:10:00.119 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415", 00:10:00.119 "assigned_rate_limits": { 00:10:00.119 "rw_ios_per_sec": 0, 00:10:00.119 "rw_mbytes_per_sec": 0, 00:10:00.119 "r_mbytes_per_sec": 0, 00:10:00.119 "w_mbytes_per_sec": 0 00:10:00.119 }, 00:10:00.119 "claimed": false, 00:10:00.119 "zoned": false, 00:10:00.119 "supported_io_types": { 00:10:00.119 "read": true, 00:10:00.119 "write": true, 00:10:00.119 "unmap": true, 00:10:00.119 "flush": true, 00:10:00.119 "reset": true, 00:10:00.119 "nvme_admin": false, 00:10:00.119 "nvme_io": false, 
00:10:00.119 "nvme_io_md": false, 00:10:00.119 "write_zeroes": true, 00:10:00.119 "zcopy": false, 00:10:00.119 "get_zone_info": false, 00:10:00.119 "zone_management": false, 00:10:00.119 "zone_append": false, 00:10:00.119 "compare": false, 00:10:00.119 "compare_and_write": false, 00:10:00.119 "abort": false, 00:10:00.119 "seek_hole": false, 00:10:00.119 "seek_data": false, 00:10:00.119 "copy": false, 00:10:00.119 "nvme_iov_md": false 00:10:00.119 }, 00:10:00.119 "memory_domains": [ 00:10:00.119 { 00:10:00.119 "dma_device_id": "system", 00:10:00.119 "dma_device_type": 1 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.119 "dma_device_type": 2 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "dma_device_id": "system", 00:10:00.119 "dma_device_type": 1 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.119 "dma_device_type": 2 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "dma_device_id": "system", 00:10:00.119 "dma_device_type": 1 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.119 "dma_device_type": 2 00:10:00.119 } 00:10:00.119 ], 00:10:00.119 "driver_specific": { 00:10:00.119 "raid": { 00:10:00.119 "uuid": "b7cef632-ec30-44bd-bfa8-3fd2d6cd5415", 00:10:00.119 "strip_size_kb": 64, 00:10:00.119 "state": "online", 00:10:00.119 "raid_level": "concat", 00:10:00.119 "superblock": true, 00:10:00.119 "num_base_bdevs": 3, 00:10:00.119 "num_base_bdevs_discovered": 3, 00:10:00.119 "num_base_bdevs_operational": 3, 00:10:00.119 "base_bdevs_list": [ 00:10:00.119 { 00:10:00.119 "name": "pt1", 00:10:00.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.119 "is_configured": true, 00:10:00.119 "data_offset": 2048, 00:10:00.119 "data_size": 63488 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "name": "pt2", 00:10:00.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.119 "is_configured": true, 00:10:00.119 "data_offset": 2048, 00:10:00.119 
"data_size": 63488 00:10:00.119 }, 00:10:00.119 { 00:10:00.119 "name": "pt3", 00:10:00.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.119 "is_configured": true, 00:10:00.119 "data_offset": 2048, 00:10:00.119 "data_size": 63488 00:10:00.119 } 00:10:00.119 ] 00:10:00.119 } 00:10:00.119 } 00:10:00.119 }' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:00.119 pt2 00:10:00.119 pt3' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.119 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.379 [2024-12-13 08:21:12.641996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b7cef632-ec30-44bd-bfa8-3fd2d6cd5415 '!=' b7cef632-ec30-44bd-bfa8-3fd2d6cd5415 ']' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67012 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67012 ']' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67012 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67012 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67012' 00:10:00.379 killing process with pid 67012 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67012 00:10:00.379 [2024-12-13 08:21:12.740307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:00.379 [2024-12-13 08:21:12.740470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.379 08:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67012 00:10:00.379 [2024-12-13 08:21:12.740583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.379 [2024-12-13 08:21:12.740638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:00.949 [2024-12-13 08:21:13.040998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.887 08:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:01.887 00:10:01.887 real 0m5.315s 00:10:01.887 user 0m7.666s 00:10:01.887 sys 0m0.887s 00:10:01.887 08:21:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.887 08:21:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.887 ************************************ 00:10:01.887 END TEST raid_superblock_test 00:10:01.887 ************************************ 00:10:01.887 08:21:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:01.887 08:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.887 08:21:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.887 08:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.887 ************************************ 00:10:01.887 START TEST raid_read_error_test 00:10:01.887 ************************************ 00:10:01.887 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:01.887 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:02.145 08:21:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rlcsf2K3Si 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67265 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67265 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67265 ']' 00:10:02.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.145 08:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.145 [2024-12-13 08:21:14.361307] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:10:02.145 [2024-12-13 08:21:14.361439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67265 ] 00:10:02.405 [2024-12-13 08:21:14.517772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.405 [2024-12-13 08:21:14.640361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.675 [2024-12-13 08:21:14.857095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.675 [2024-12-13 08:21:14.857165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.934 BaseBdev1_malloc 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.934 true 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.934 [2024-12-13 08:21:15.289533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:02.934 [2024-12-13 08:21:15.289654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.934 [2024-12-13 08:21:15.289697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:02.934 [2024-12-13 08:21:15.289732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.934 [2024-12-13 08:21:15.292079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.934 [2024-12-13 08:21:15.292182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.934 BaseBdev1 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.934 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 BaseBdev2_malloc 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 true 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 [2024-12-13 08:21:15.355906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:03.193 [2024-12-13 08:21:15.356009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.193 [2024-12-13 08:21:15.356045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:03.193 [2024-12-13 08:21:15.356075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.193 [2024-12-13 08:21:15.358200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.193 [2024-12-13 08:21:15.358275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:03.193 BaseBdev2 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 BaseBdev3_malloc 00:10:03.193 08:21:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 true 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 [2024-12-13 08:21:15.437685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:03.193 [2024-12-13 08:21:15.437782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.193 [2024-12-13 08:21:15.437847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:03.193 [2024-12-13 08:21:15.437890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.193 [2024-12-13 08:21:15.440165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.193 [2024-12-13 08:21:15.440237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:03.193 BaseBdev3 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 [2024-12-13 08:21:15.449755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.193 [2024-12-13 08:21:15.451856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.193 [2024-12-13 08:21:15.452003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.193 [2024-12-13 08:21:15.452271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.193 [2024-12-13 08:21:15.452287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:03.193 [2024-12-13 08:21:15.452576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:03.193 [2024-12-13 08:21:15.452752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.193 [2024-12-13 08:21:15.452766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:03.193 [2024-12-13 08:21:15.452931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.193 08:21:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.193 "name": "raid_bdev1", 00:10:03.193 "uuid": "619eff3a-9ce1-4086-9345-317c80a47aff", 00:10:03.193 "strip_size_kb": 64, 00:10:03.193 "state": "online", 00:10:03.193 "raid_level": "concat", 00:10:03.193 "superblock": true, 00:10:03.193 "num_base_bdevs": 3, 00:10:03.193 "num_base_bdevs_discovered": 3, 00:10:03.193 "num_base_bdevs_operational": 3, 00:10:03.193 "base_bdevs_list": [ 00:10:03.193 { 00:10:03.193 "name": "BaseBdev1", 00:10:03.193 "uuid": "ec146cbf-f3df-5d5c-ae9e-90b597c0a286", 00:10:03.193 "is_configured": true, 00:10:03.193 "data_offset": 2048, 00:10:03.193 "data_size": 63488 00:10:03.193 }, 00:10:03.193 { 00:10:03.193 "name": "BaseBdev2", 00:10:03.193 "uuid": "8abc9db9-58e1-500a-9af9-b43ef389f176", 00:10:03.193 "is_configured": true, 00:10:03.193 "data_offset": 2048, 00:10:03.193 "data_size": 63488 
00:10:03.193 }, 00:10:03.193 { 00:10:03.193 "name": "BaseBdev3", 00:10:03.194 "uuid": "024a25d8-ef2e-5954-bf4e-419ace1a6997", 00:10:03.194 "is_configured": true, 00:10:03.194 "data_offset": 2048, 00:10:03.194 "data_size": 63488 00:10:03.194 } 00:10:03.194 ] 00:10:03.194 }' 00:10:03.194 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.194 08:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.827 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:03.827 08:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:03.827 [2024-12-13 08:21:15.966286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.765 "name": "raid_bdev1", 00:10:04.765 "uuid": "619eff3a-9ce1-4086-9345-317c80a47aff", 00:10:04.765 "strip_size_kb": 64, 00:10:04.765 "state": "online", 00:10:04.765 "raid_level": "concat", 00:10:04.765 "superblock": true, 00:10:04.765 "num_base_bdevs": 3, 00:10:04.765 "num_base_bdevs_discovered": 3, 00:10:04.765 "num_base_bdevs_operational": 3, 00:10:04.765 "base_bdevs_list": [ 00:10:04.765 { 00:10:04.765 "name": "BaseBdev1", 00:10:04.765 "uuid": "ec146cbf-f3df-5d5c-ae9e-90b597c0a286", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 2048, 00:10:04.765 "data_size": 63488 
00:10:04.765 }, 00:10:04.765 { 00:10:04.766 "name": "BaseBdev2", 00:10:04.766 "uuid": "8abc9db9-58e1-500a-9af9-b43ef389f176", 00:10:04.766 "is_configured": true, 00:10:04.766 "data_offset": 2048, 00:10:04.766 "data_size": 63488 00:10:04.766 }, 00:10:04.766 { 00:10:04.766 "name": "BaseBdev3", 00:10:04.766 "uuid": "024a25d8-ef2e-5954-bf4e-419ace1a6997", 00:10:04.766 "is_configured": true, 00:10:04.766 "data_offset": 2048, 00:10:04.766 "data_size": 63488 00:10:04.766 } 00:10:04.766 ] 00:10:04.766 }' 00:10:04.766 08:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.766 08:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.026 [2024-12-13 08:21:17.334827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.026 [2024-12-13 08:21:17.334946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.026 [2024-12-13 08:21:17.338340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.026 [2024-12-13 08:21:17.338437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.026 [2024-12-13 08:21:17.338515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.026 [2024-12-13 08:21:17.338580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:05.026 { 00:10:05.026 "results": [ 00:10:05.026 { 00:10:05.026 "job": "raid_bdev1", 00:10:05.026 "core_mask": "0x1", 00:10:05.026 "workload": "randrw", 00:10:05.026 "percentage": 50, 
00:10:05.026 "status": "finished", 00:10:05.026 "queue_depth": 1, 00:10:05.026 "io_size": 131072, 00:10:05.026 "runtime": 1.369543, 00:10:05.026 "iops": 14869.923762890248, 00:10:05.026 "mibps": 1858.740470361281, 00:10:05.026 "io_failed": 1, 00:10:05.026 "io_timeout": 0, 00:10:05.026 "avg_latency_us": 93.16283985596338, 00:10:05.026 "min_latency_us": 27.50043668122271, 00:10:05.026 "max_latency_us": 1366.5257641921398 00:10:05.026 } 00:10:05.026 ], 00:10:05.026 "core_count": 1 00:10:05.026 } 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67265 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67265 ']' 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67265 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67265 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.026 killing process with pid 67265 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67265' 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67265 00:10:05.026 [2024-12-13 08:21:17.387949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.026 08:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67265 00:10:05.286 [2024-12-13 
08:21:17.626467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rlcsf2K3Si 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:06.667 ************************************ 00:10:06.667 END TEST raid_read_error_test 00:10:06.667 ************************************ 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:06.667 00:10:06.667 real 0m4.608s 00:10:06.667 user 0m5.477s 00:10:06.667 sys 0m0.562s 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.667 08:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 08:21:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:06.667 08:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.667 08:21:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.667 08:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 ************************************ 00:10:06.667 START TEST raid_write_error_test 00:10:06.667 ************************************ 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:06.667 08:21:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:06.667 08:21:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NtownG3YFY 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67416 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67416 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67416 ']' 00:10:06.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.667 08:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.927 [2024-12-13 08:21:19.037329] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:06.927 [2024-12-13 08:21:19.037454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67416 ] 00:10:06.927 [2024-12-13 08:21:19.214366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.187 [2024-12-13 08:21:19.330991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.187 [2024-12-13 08:21:19.536342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.187 [2024-12-13 08:21:19.536386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 BaseBdev1_malloc 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 true 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 [2024-12-13 08:21:19.971190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:07.757 [2024-12-13 08:21:19.971291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.757 [2024-12-13 08:21:19.971329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:07.757 [2024-12-13 08:21:19.971361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.757 [2024-12-13 08:21:19.973501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.757 [2024-12-13 08:21:19.973575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:07.757 BaseBdev1 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.757 BaseBdev2_malloc 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 true 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 [2024-12-13 08:21:20.038546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.757 [2024-12-13 08:21:20.038600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.757 [2024-12-13 08:21:20.038617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:07.757 [2024-12-13 08:21:20.038628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.757 [2024-12-13 08:21:20.040732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.757 [2024-12-13 08:21:20.040768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.757 BaseBdev2 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.757 08:21:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 BaseBdev3_malloc 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.757 true 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.757 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 [2024-12-13 08:21:20.120888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:08.017 [2024-12-13 08:21:20.120957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.017 [2024-12-13 08:21:20.120984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.017 [2024-12-13 08:21:20.121000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.017 [2024-12-13 08:21:20.123751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.017 [2024-12-13 08:21:20.123814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:08.017 BaseBdev3 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 [2024-12-13 08:21:20.132958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.017 [2024-12-13 08:21:20.135170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.017 [2024-12-13 08:21:20.135273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.017 [2024-12-13 08:21:20.135578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:08.017 [2024-12-13 08:21:20.135610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.017 [2024-12-13 08:21:20.135944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:08.017 [2024-12-13 08:21:20.136190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:08.017 [2024-12-13 08:21:20.136224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:08.017 [2024-12-13 08:21:20.136428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.017 "name": "raid_bdev1", 00:10:08.017 "uuid": "9b3e9745-e792-4664-a28f-0bb45586df53", 00:10:08.017 "strip_size_kb": 64, 00:10:08.017 "state": "online", 00:10:08.017 "raid_level": "concat", 00:10:08.017 "superblock": true, 00:10:08.017 "num_base_bdevs": 3, 00:10:08.017 "num_base_bdevs_discovered": 3, 00:10:08.017 "num_base_bdevs_operational": 3, 00:10:08.017 "base_bdevs_list": [ 00:10:08.017 { 00:10:08.017 
"name": "BaseBdev1", 00:10:08.017 "uuid": "4132d554-3b68-547e-9cbc-68ecac32c1d1", 00:10:08.017 "is_configured": true, 00:10:08.017 "data_offset": 2048, 00:10:08.017 "data_size": 63488 00:10:08.017 }, 00:10:08.017 { 00:10:08.017 "name": "BaseBdev2", 00:10:08.017 "uuid": "b58a9f8a-bd07-5ab1-af5a-dc79f5e9efbb", 00:10:08.017 "is_configured": true, 00:10:08.017 "data_offset": 2048, 00:10:08.017 "data_size": 63488 00:10:08.017 }, 00:10:08.017 { 00:10:08.017 "name": "BaseBdev3", 00:10:08.017 "uuid": "03fde311-3e9b-51e6-aeca-66ad2d288d37", 00:10:08.017 "is_configured": true, 00:10:08.017 "data_offset": 2048, 00:10:08.017 "data_size": 63488 00:10:08.017 } 00:10:08.017 ] 00:10:08.017 }' 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.017 08:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.277 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.277 08:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.537 [2024-12-13 08:21:20.725131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.476 "name": "raid_bdev1", 00:10:09.476 "uuid": "9b3e9745-e792-4664-a28f-0bb45586df53", 00:10:09.476 "strip_size_kb": 64, 00:10:09.476 "state": "online", 
00:10:09.476 "raid_level": "concat", 00:10:09.476 "superblock": true, 00:10:09.476 "num_base_bdevs": 3, 00:10:09.476 "num_base_bdevs_discovered": 3, 00:10:09.476 "num_base_bdevs_operational": 3, 00:10:09.476 "base_bdevs_list": [ 00:10:09.476 { 00:10:09.476 "name": "BaseBdev1", 00:10:09.476 "uuid": "4132d554-3b68-547e-9cbc-68ecac32c1d1", 00:10:09.476 "is_configured": true, 00:10:09.476 "data_offset": 2048, 00:10:09.476 "data_size": 63488 00:10:09.476 }, 00:10:09.476 { 00:10:09.476 "name": "BaseBdev2", 00:10:09.476 "uuid": "b58a9f8a-bd07-5ab1-af5a-dc79f5e9efbb", 00:10:09.476 "is_configured": true, 00:10:09.476 "data_offset": 2048, 00:10:09.476 "data_size": 63488 00:10:09.476 }, 00:10:09.476 { 00:10:09.476 "name": "BaseBdev3", 00:10:09.476 "uuid": "03fde311-3e9b-51e6-aeca-66ad2d288d37", 00:10:09.476 "is_configured": true, 00:10:09.476 "data_offset": 2048, 00:10:09.476 "data_size": 63488 00:10:09.476 } 00:10:09.476 ] 00:10:09.476 }' 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.476 08:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.045 [2024-12-13 08:21:22.117623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.045 [2024-12-13 08:21:22.117659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.045 [2024-12-13 08:21:22.120349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.045 [2024-12-13 08:21:22.120398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.045 [2024-12-13 08:21:22.120435] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.045 [2024-12-13 08:21:22.120447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:10.045 { 00:10:10.045 "results": [ 00:10:10.045 { 00:10:10.045 "job": "raid_bdev1", 00:10:10.045 "core_mask": "0x1", 00:10:10.045 "workload": "randrw", 00:10:10.045 "percentage": 50, 00:10:10.045 "status": "finished", 00:10:10.045 "queue_depth": 1, 00:10:10.045 "io_size": 131072, 00:10:10.045 "runtime": 1.393441, 00:10:10.045 "iops": 14895.4997018173, 00:10:10.045 "mibps": 1861.9374627271625, 00:10:10.045 "io_failed": 1, 00:10:10.045 "io_timeout": 0, 00:10:10.045 "avg_latency_us": 92.96836889665042, 00:10:10.045 "min_latency_us": 26.829694323144103, 00:10:10.045 "max_latency_us": 1373.6803493449781 00:10:10.045 } 00:10:10.045 ], 00:10:10.045 "core_count": 1 00:10:10.045 } 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67416 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67416 ']' 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67416 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67416 00:10:10.045 killing process with pid 67416 00:10:10.045 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.046 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.046 08:21:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67416' 00:10:10.046 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67416 00:10:10.046 [2024-12-13 08:21:22.152389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.046 08:21:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67416 00:10:10.046 [2024-12-13 08:21:22.384019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NtownG3YFY 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:11.425 00:10:11.425 real 0m4.672s 00:10:11.425 user 0m5.594s 00:10:11.425 sys 0m0.608s 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.425 08:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.425 ************************************ 00:10:11.425 END TEST raid_write_error_test 00:10:11.425 ************************************ 00:10:11.425 08:21:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:11.425 08:21:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:11.425 08:21:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.425 08:21:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.425 08:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.425 ************************************ 00:10:11.425 START TEST raid_state_function_test 00:10:11.425 ************************************ 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.425 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67554 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67554' 00:10:11.426 Process raid pid: 67554 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67554 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67554 ']' 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.426 08:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.426 [2024-12-13 08:21:23.770038] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:11.426 [2024-12-13 08:21:23.770167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.685 [2024-12-13 08:21:23.943147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.944 [2024-12-13 08:21:24.066146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.944 [2024-12-13 08:21:24.295670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.944 [2024-12-13 08:21:24.295709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.510 [2024-12-13 08:21:24.621574] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.510 [2024-12-13 08:21:24.621642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.510 [2024-12-13 08:21:24.621653] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.510 [2024-12-13 08:21:24.621664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.510 [2024-12-13 08:21:24.621670] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.510 [2024-12-13 08:21:24.621679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.510 
08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.510 08:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.510 "name": "Existed_Raid", 00:10:12.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.510 "strip_size_kb": 0, 00:10:12.510 "state": "configuring", 00:10:12.510 "raid_level": "raid1", 00:10:12.510 "superblock": false, 00:10:12.510 "num_base_bdevs": 3, 00:10:12.510 "num_base_bdevs_discovered": 0, 00:10:12.510 "num_base_bdevs_operational": 3, 00:10:12.510 "base_bdevs_list": [ 00:10:12.510 { 00:10:12.510 "name": "BaseBdev1", 00:10:12.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.510 "is_configured": false, 00:10:12.510 "data_offset": 0, 00:10:12.510 "data_size": 0 00:10:12.510 }, 00:10:12.510 { 00:10:12.510 "name": "BaseBdev2", 00:10:12.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.510 "is_configured": false, 00:10:12.510 "data_offset": 0, 00:10:12.510 "data_size": 0 00:10:12.510 }, 00:10:12.510 { 00:10:12.510 "name": "BaseBdev3", 00:10:12.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.510 "is_configured": false, 00:10:12.510 "data_offset": 0, 00:10:12.510 "data_size": 0 00:10:12.510 } 00:10:12.510 ] 00:10:12.510 }' 00:10:12.510 08:21:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.511 08:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.775 [2024-12-13 08:21:25.076777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.775 [2024-12-13 08:21:25.076825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.775 [2024-12-13 08:21:25.088739] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.775 [2024-12-13 08:21:25.088789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.775 [2024-12-13 08:21:25.088799] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.775 [2024-12-13 08:21:25.088809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.775 [2024-12-13 08:21:25.088816] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.775 [2024-12-13 08:21:25.088826] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.775 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.067 [2024-12-13 08:21:25.135982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.067 BaseBdev1 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.067 [ 00:10:13.067 { 00:10:13.067 "name": "BaseBdev1", 00:10:13.067 "aliases": [ 00:10:13.067 "6ffc28ba-d7ef-482b-a39b-6b80f06d3752" 00:10:13.067 ], 00:10:13.067 "product_name": "Malloc disk", 00:10:13.067 "block_size": 512, 00:10:13.067 "num_blocks": 65536, 00:10:13.067 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:13.067 "assigned_rate_limits": { 00:10:13.067 "rw_ios_per_sec": 0, 00:10:13.067 "rw_mbytes_per_sec": 0, 00:10:13.067 "r_mbytes_per_sec": 0, 00:10:13.067 "w_mbytes_per_sec": 0 00:10:13.067 }, 00:10:13.067 "claimed": true, 00:10:13.067 "claim_type": "exclusive_write", 00:10:13.067 "zoned": false, 00:10:13.067 "supported_io_types": { 00:10:13.067 "read": true, 00:10:13.067 "write": true, 00:10:13.067 "unmap": true, 00:10:13.067 "flush": true, 00:10:13.067 "reset": true, 00:10:13.067 "nvme_admin": false, 00:10:13.067 "nvme_io": false, 00:10:13.067 "nvme_io_md": false, 00:10:13.067 "write_zeroes": true, 00:10:13.067 "zcopy": true, 00:10:13.067 "get_zone_info": false, 00:10:13.067 "zone_management": false, 00:10:13.067 "zone_append": false, 00:10:13.067 "compare": false, 00:10:13.067 "compare_and_write": false, 00:10:13.067 "abort": true, 00:10:13.067 "seek_hole": false, 00:10:13.067 "seek_data": false, 00:10:13.067 "copy": true, 00:10:13.067 "nvme_iov_md": false 00:10:13.067 }, 00:10:13.067 "memory_domains": [ 00:10:13.067 { 00:10:13.067 "dma_device_id": "system", 00:10:13.067 "dma_device_type": 1 00:10:13.067 }, 00:10:13.067 { 00:10:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.067 "dma_device_type": 2 00:10:13.067 } 00:10:13.067 ], 00:10:13.067 "driver_specific": {} 00:10:13.067 } 00:10:13.067 ] 00:10:13.067 08:21:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:13.067 "name": "Existed_Raid", 00:10:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.067 "strip_size_kb": 0, 00:10:13.067 "state": "configuring", 00:10:13.067 "raid_level": "raid1", 00:10:13.067 "superblock": false, 00:10:13.067 "num_base_bdevs": 3, 00:10:13.067 "num_base_bdevs_discovered": 1, 00:10:13.067 "num_base_bdevs_operational": 3, 00:10:13.067 "base_bdevs_list": [ 00:10:13.067 { 00:10:13.067 "name": "BaseBdev1", 00:10:13.067 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:13.067 "is_configured": true, 00:10:13.067 "data_offset": 0, 00:10:13.067 "data_size": 65536 00:10:13.067 }, 00:10:13.067 { 00:10:13.067 "name": "BaseBdev2", 00:10:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.067 "is_configured": false, 00:10:13.067 "data_offset": 0, 00:10:13.067 "data_size": 0 00:10:13.067 }, 00:10:13.067 { 00:10:13.067 "name": "BaseBdev3", 00:10:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.067 "is_configured": false, 00:10:13.067 "data_offset": 0, 00:10:13.067 "data_size": 0 00:10:13.067 } 00:10:13.067 ] 00:10:13.067 }' 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.067 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.325 [2024-12-13 08:21:25.655162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.325 [2024-12-13 08:21:25.655222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.325 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.325 [2024-12-13 08:21:25.667164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.326 [2024-12-13 08:21:25.669167] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.326 [2024-12-13 08:21:25.669207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.326 [2024-12-13 08:21:25.669218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.326 [2024-12-13 08:21:25.669226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.326 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.584 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.584 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.584 "name": "Existed_Raid", 00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.584 "strip_size_kb": 0, 00:10:13.584 "state": "configuring", 00:10:13.584 "raid_level": "raid1", 00:10:13.584 "superblock": false, 00:10:13.584 "num_base_bdevs": 3, 00:10:13.584 "num_base_bdevs_discovered": 1, 00:10:13.584 "num_base_bdevs_operational": 3, 00:10:13.584 "base_bdevs_list": [ 00:10:13.584 { 00:10:13.584 "name": "BaseBdev1", 00:10:13.584 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:13.584 "is_configured": true, 00:10:13.584 "data_offset": 0, 00:10:13.584 "data_size": 65536 00:10:13.584 }, 00:10:13.584 { 00:10:13.584 "name": "BaseBdev2", 00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.584 
"is_configured": false, 00:10:13.584 "data_offset": 0, 00:10:13.584 "data_size": 0 00:10:13.584 }, 00:10:13.584 { 00:10:13.584 "name": "BaseBdev3", 00:10:13.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.584 "is_configured": false, 00:10:13.584 "data_offset": 0, 00:10:13.584 "data_size": 0 00:10:13.584 } 00:10:13.584 ] 00:10:13.584 }' 00:10:13.584 08:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.584 08:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.842 [2024-12-13 08:21:26.172027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.842 BaseBdev2 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.842 08:21:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.842 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.842 [ 00:10:13.842 { 00:10:13.842 "name": "BaseBdev2", 00:10:13.842 "aliases": [ 00:10:13.842 "7c733277-6de3-4b56-8971-63b4947c8764" 00:10:13.842 ], 00:10:13.842 "product_name": "Malloc disk", 00:10:13.842 "block_size": 512, 00:10:13.842 "num_blocks": 65536, 00:10:13.842 "uuid": "7c733277-6de3-4b56-8971-63b4947c8764", 00:10:13.842 "assigned_rate_limits": { 00:10:13.842 "rw_ios_per_sec": 0, 00:10:13.842 "rw_mbytes_per_sec": 0, 00:10:13.842 "r_mbytes_per_sec": 0, 00:10:13.842 "w_mbytes_per_sec": 0 00:10:13.842 }, 00:10:13.842 "claimed": true, 00:10:13.842 "claim_type": "exclusive_write", 00:10:13.842 "zoned": false, 00:10:13.842 "supported_io_types": { 00:10:13.842 "read": true, 00:10:13.842 "write": true, 00:10:13.843 "unmap": true, 00:10:13.843 "flush": true, 00:10:13.843 "reset": true, 00:10:13.843 "nvme_admin": false, 00:10:13.843 "nvme_io": false, 00:10:13.843 "nvme_io_md": false, 00:10:13.843 "write_zeroes": true, 00:10:13.843 "zcopy": true, 00:10:13.843 "get_zone_info": false, 00:10:13.843 "zone_management": false, 00:10:13.843 "zone_append": false, 00:10:13.843 "compare": false, 00:10:13.843 "compare_and_write": false, 00:10:13.843 "abort": true, 00:10:13.843 "seek_hole": false, 00:10:13.843 "seek_data": false, 00:10:13.843 "copy": true, 00:10:13.843 "nvme_iov_md": false 00:10:13.843 }, 00:10:13.843 
"memory_domains": [ 00:10:13.843 { 00:10:13.843 "dma_device_id": "system", 00:10:13.843 "dma_device_type": 1 00:10:13.843 }, 00:10:13.843 { 00:10:13.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.843 "dma_device_type": 2 00:10:14.101 } 00:10:14.101 ], 00:10:14.101 "driver_specific": {} 00:10:14.101 } 00:10:14.101 ] 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.101 "name": "Existed_Raid", 00:10:14.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.101 "strip_size_kb": 0, 00:10:14.101 "state": "configuring", 00:10:14.101 "raid_level": "raid1", 00:10:14.101 "superblock": false, 00:10:14.101 "num_base_bdevs": 3, 00:10:14.101 "num_base_bdevs_discovered": 2, 00:10:14.101 "num_base_bdevs_operational": 3, 00:10:14.101 "base_bdevs_list": [ 00:10:14.101 { 00:10:14.101 "name": "BaseBdev1", 00:10:14.101 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:14.101 "is_configured": true, 00:10:14.101 "data_offset": 0, 00:10:14.101 "data_size": 65536 00:10:14.101 }, 00:10:14.101 { 00:10:14.101 "name": "BaseBdev2", 00:10:14.101 "uuid": "7c733277-6de3-4b56-8971-63b4947c8764", 00:10:14.101 "is_configured": true, 00:10:14.101 "data_offset": 0, 00:10:14.101 "data_size": 65536 00:10:14.101 }, 00:10:14.101 { 00:10:14.101 "name": "BaseBdev3", 00:10:14.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.101 "is_configured": false, 00:10:14.101 "data_offset": 0, 00:10:14.101 "data_size": 0 00:10:14.101 } 00:10:14.101 ] 00:10:14.101 }' 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.101 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 [2024-12-13 08:21:26.672361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.359 [2024-12-13 08:21:26.672416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.359 [2024-12-13 08:21:26.672429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:14.359 [2024-12-13 08:21:26.672700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:14.359 [2024-12-13 08:21:26.672868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.359 [2024-12-13 08:21:26.672882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:14.359 [2024-12-13 08:21:26.673146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.359 BaseBdev3 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.359 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 [ 00:10:14.359 { 00:10:14.359 "name": "BaseBdev3", 00:10:14.359 "aliases": [ 00:10:14.359 "42a6befd-a1f6-414e-a2c4-96c7fbda79d8" 00:10:14.359 ], 00:10:14.359 "product_name": "Malloc disk", 00:10:14.359 "block_size": 512, 00:10:14.359 "num_blocks": 65536, 00:10:14.359 "uuid": "42a6befd-a1f6-414e-a2c4-96c7fbda79d8", 00:10:14.359 "assigned_rate_limits": { 00:10:14.359 "rw_ios_per_sec": 0, 00:10:14.359 "rw_mbytes_per_sec": 0, 00:10:14.359 "r_mbytes_per_sec": 0, 00:10:14.359 "w_mbytes_per_sec": 0 00:10:14.359 }, 00:10:14.359 "claimed": true, 00:10:14.359 "claim_type": "exclusive_write", 00:10:14.359 "zoned": false, 00:10:14.359 "supported_io_types": { 00:10:14.359 "read": true, 00:10:14.359 "write": true, 00:10:14.359 "unmap": true, 00:10:14.360 "flush": true, 00:10:14.360 "reset": true, 00:10:14.360 "nvme_admin": false, 00:10:14.360 "nvme_io": false, 00:10:14.360 "nvme_io_md": false, 00:10:14.360 "write_zeroes": true, 00:10:14.360 "zcopy": true, 00:10:14.360 "get_zone_info": false, 00:10:14.360 "zone_management": false, 00:10:14.360 "zone_append": false, 00:10:14.360 "compare": false, 00:10:14.360 "compare_and_write": false, 00:10:14.360 "abort": true, 00:10:14.360 "seek_hole": false, 00:10:14.360 "seek_data": false, 00:10:14.360 
"copy": true, 00:10:14.360 "nvme_iov_md": false 00:10:14.360 }, 00:10:14.360 "memory_domains": [ 00:10:14.360 { 00:10:14.360 "dma_device_id": "system", 00:10:14.360 "dma_device_type": 1 00:10:14.360 }, 00:10:14.360 { 00:10:14.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.360 "dma_device_type": 2 00:10:14.360 } 00:10:14.360 ], 00:10:14.360 "driver_specific": {} 00:10:14.360 } 00:10:14.360 ] 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.360 08:21:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.360 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.618 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.618 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.618 "name": "Existed_Raid", 00:10:14.618 "uuid": "51d1e315-fd55-41c7-a6a8-7af293e2cbc6", 00:10:14.618 "strip_size_kb": 0, 00:10:14.618 "state": "online", 00:10:14.618 "raid_level": "raid1", 00:10:14.618 "superblock": false, 00:10:14.618 "num_base_bdevs": 3, 00:10:14.618 "num_base_bdevs_discovered": 3, 00:10:14.618 "num_base_bdevs_operational": 3, 00:10:14.618 "base_bdevs_list": [ 00:10:14.618 { 00:10:14.618 "name": "BaseBdev1", 00:10:14.618 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:14.618 "is_configured": true, 00:10:14.618 "data_offset": 0, 00:10:14.618 "data_size": 65536 00:10:14.618 }, 00:10:14.618 { 00:10:14.618 "name": "BaseBdev2", 00:10:14.618 "uuid": "7c733277-6de3-4b56-8971-63b4947c8764", 00:10:14.618 "is_configured": true, 00:10:14.618 "data_offset": 0, 00:10:14.618 "data_size": 65536 00:10:14.618 }, 00:10:14.618 { 00:10:14.618 "name": "BaseBdev3", 00:10:14.618 "uuid": "42a6befd-a1f6-414e-a2c4-96c7fbda79d8", 00:10:14.618 "is_configured": true, 00:10:14.618 "data_offset": 0, 00:10:14.618 "data_size": 65536 00:10:14.618 } 00:10:14.618 ] 00:10:14.618 }' 00:10:14.618 08:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.618 08:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.876 08:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.876 [2024-12-13 08:21:27.175969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.876 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.876 "name": "Existed_Raid", 00:10:14.876 "aliases": [ 00:10:14.876 "51d1e315-fd55-41c7-a6a8-7af293e2cbc6" 00:10:14.876 ], 00:10:14.876 "product_name": "Raid Volume", 00:10:14.876 "block_size": 512, 00:10:14.876 "num_blocks": 65536, 00:10:14.876 "uuid": "51d1e315-fd55-41c7-a6a8-7af293e2cbc6", 00:10:14.876 "assigned_rate_limits": { 00:10:14.876 "rw_ios_per_sec": 0, 00:10:14.876 "rw_mbytes_per_sec": 0, 00:10:14.876 "r_mbytes_per_sec": 0, 00:10:14.876 "w_mbytes_per_sec": 0 00:10:14.876 }, 00:10:14.876 "claimed": false, 00:10:14.876 "zoned": false, 
00:10:14.876 "supported_io_types": { 00:10:14.876 "read": true, 00:10:14.876 "write": true, 00:10:14.876 "unmap": false, 00:10:14.876 "flush": false, 00:10:14.876 "reset": true, 00:10:14.877 "nvme_admin": false, 00:10:14.877 "nvme_io": false, 00:10:14.877 "nvme_io_md": false, 00:10:14.877 "write_zeroes": true, 00:10:14.877 "zcopy": false, 00:10:14.877 "get_zone_info": false, 00:10:14.877 "zone_management": false, 00:10:14.877 "zone_append": false, 00:10:14.877 "compare": false, 00:10:14.877 "compare_and_write": false, 00:10:14.877 "abort": false, 00:10:14.877 "seek_hole": false, 00:10:14.877 "seek_data": false, 00:10:14.877 "copy": false, 00:10:14.877 "nvme_iov_md": false 00:10:14.877 }, 00:10:14.877 "memory_domains": [ 00:10:14.877 { 00:10:14.877 "dma_device_id": "system", 00:10:14.877 "dma_device_type": 1 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.877 "dma_device_type": 2 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "dma_device_id": "system", 00:10:14.877 "dma_device_type": 1 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.877 "dma_device_type": 2 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "dma_device_id": "system", 00:10:14.877 "dma_device_type": 1 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.877 "dma_device_type": 2 00:10:14.877 } 00:10:14.877 ], 00:10:14.877 "driver_specific": { 00:10:14.877 "raid": { 00:10:14.877 "uuid": "51d1e315-fd55-41c7-a6a8-7af293e2cbc6", 00:10:14.877 "strip_size_kb": 0, 00:10:14.877 "state": "online", 00:10:14.877 "raid_level": "raid1", 00:10:14.877 "superblock": false, 00:10:14.877 "num_base_bdevs": 3, 00:10:14.877 "num_base_bdevs_discovered": 3, 00:10:14.877 "num_base_bdevs_operational": 3, 00:10:14.877 "base_bdevs_list": [ 00:10:14.877 { 00:10:14.877 "name": "BaseBdev1", 00:10:14.877 "uuid": "6ffc28ba-d7ef-482b-a39b-6b80f06d3752", 00:10:14.877 "is_configured": true, 00:10:14.877 
"data_offset": 0, 00:10:14.877 "data_size": 65536 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "name": "BaseBdev2", 00:10:14.877 "uuid": "7c733277-6de3-4b56-8971-63b4947c8764", 00:10:14.877 "is_configured": true, 00:10:14.877 "data_offset": 0, 00:10:14.877 "data_size": 65536 00:10:14.877 }, 00:10:14.877 { 00:10:14.877 "name": "BaseBdev3", 00:10:14.877 "uuid": "42a6befd-a1f6-414e-a2c4-96c7fbda79d8", 00:10:14.877 "is_configured": true, 00:10:14.877 "data_offset": 0, 00:10:14.877 "data_size": 65536 00:10:14.877 } 00:10:14.877 ] 00:10:14.877 } 00:10:14.877 } 00:10:14.877 }' 00:10:14.877 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.136 BaseBdev2 00:10:15.136 BaseBdev3' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.136 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.136 [2024-12-13 08:21:27.439240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.395 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.396 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.396 "name": "Existed_Raid", 00:10:15.396 "uuid": "51d1e315-fd55-41c7-a6a8-7af293e2cbc6", 00:10:15.396 "strip_size_kb": 0, 00:10:15.396 "state": "online", 00:10:15.396 "raid_level": "raid1", 00:10:15.396 "superblock": false, 00:10:15.396 "num_base_bdevs": 3, 00:10:15.396 "num_base_bdevs_discovered": 2, 00:10:15.396 "num_base_bdevs_operational": 2, 00:10:15.396 "base_bdevs_list": [ 00:10:15.396 { 00:10:15.396 "name": null, 00:10:15.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.396 "is_configured": false, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 65536 00:10:15.396 }, 00:10:15.396 { 00:10:15.396 "name": "BaseBdev2", 00:10:15.396 "uuid": "7c733277-6de3-4b56-8971-63b4947c8764", 00:10:15.396 "is_configured": true, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 65536 00:10:15.396 }, 00:10:15.396 { 00:10:15.396 "name": "BaseBdev3", 00:10:15.396 "uuid": "42a6befd-a1f6-414e-a2c4-96c7fbda79d8", 00:10:15.396 "is_configured": true, 00:10:15.396 "data_offset": 0, 00:10:15.396 "data_size": 65536 00:10:15.396 } 00:10:15.396 ] 
00:10:15.396 }' 00:10:15.396 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.396 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.654 08:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.654 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.914 [2024-12-13 08:21:28.054040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.914 08:21:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.914 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.914 [2024-12-13 08:21:28.211211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.914 [2024-12-13 08:21:28.211310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.174 [2024-12-13 08:21:28.307407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.174 [2024-12-13 08:21:28.307473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.174 [2024-12-13 08:21:28.307501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.174 08:21:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 BaseBdev2 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.174 
08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 [ 00:10:16.174 { 00:10:16.174 "name": "BaseBdev2", 00:10:16.174 "aliases": [ 00:10:16.174 "d038b9b4-84a8-4254-acfa-a94cd42a8dba" 00:10:16.174 ], 00:10:16.174 "product_name": "Malloc disk", 00:10:16.174 "block_size": 512, 00:10:16.174 "num_blocks": 65536, 00:10:16.174 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:16.174 "assigned_rate_limits": { 00:10:16.174 "rw_ios_per_sec": 0, 00:10:16.174 "rw_mbytes_per_sec": 0, 00:10:16.174 "r_mbytes_per_sec": 0, 00:10:16.174 "w_mbytes_per_sec": 0 00:10:16.174 }, 00:10:16.174 "claimed": false, 00:10:16.174 "zoned": false, 00:10:16.174 "supported_io_types": { 00:10:16.174 "read": true, 00:10:16.174 "write": true, 00:10:16.174 "unmap": true, 00:10:16.174 "flush": true, 00:10:16.174 "reset": true, 00:10:16.174 "nvme_admin": false, 00:10:16.174 "nvme_io": false, 00:10:16.174 "nvme_io_md": false, 00:10:16.174 "write_zeroes": true, 
00:10:16.174 "zcopy": true, 00:10:16.174 "get_zone_info": false, 00:10:16.174 "zone_management": false, 00:10:16.174 "zone_append": false, 00:10:16.174 "compare": false, 00:10:16.174 "compare_and_write": false, 00:10:16.174 "abort": true, 00:10:16.174 "seek_hole": false, 00:10:16.174 "seek_data": false, 00:10:16.174 "copy": true, 00:10:16.174 "nvme_iov_md": false 00:10:16.174 }, 00:10:16.174 "memory_domains": [ 00:10:16.174 { 00:10:16.174 "dma_device_id": "system", 00:10:16.174 "dma_device_type": 1 00:10:16.174 }, 00:10:16.174 { 00:10:16.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.174 "dma_device_type": 2 00:10:16.174 } 00:10:16.174 ], 00:10:16.174 "driver_specific": {} 00:10:16.174 } 00:10:16.174 ] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 BaseBdev3 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.174 08:21:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.174 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.174 [ 00:10:16.174 { 00:10:16.174 "name": "BaseBdev3", 00:10:16.174 "aliases": [ 00:10:16.174 "cecb2e42-c518-4869-8067-e85881a5f5f5" 00:10:16.174 ], 00:10:16.174 "product_name": "Malloc disk", 00:10:16.174 "block_size": 512, 00:10:16.174 "num_blocks": 65536, 00:10:16.175 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:16.175 "assigned_rate_limits": { 00:10:16.175 "rw_ios_per_sec": 0, 00:10:16.175 "rw_mbytes_per_sec": 0, 00:10:16.175 "r_mbytes_per_sec": 0, 00:10:16.175 "w_mbytes_per_sec": 0 00:10:16.175 }, 00:10:16.175 "claimed": false, 00:10:16.175 "zoned": false, 00:10:16.175 "supported_io_types": { 00:10:16.175 "read": true, 00:10:16.175 "write": true, 00:10:16.175 "unmap": true, 00:10:16.175 "flush": true, 00:10:16.175 "reset": true, 00:10:16.175 "nvme_admin": false, 00:10:16.175 "nvme_io": false, 00:10:16.175 "nvme_io_md": false, 00:10:16.175 "write_zeroes": true, 
00:10:16.175 "zcopy": true, 00:10:16.175 "get_zone_info": false, 00:10:16.175 "zone_management": false, 00:10:16.175 "zone_append": false, 00:10:16.175 "compare": false, 00:10:16.175 "compare_and_write": false, 00:10:16.175 "abort": true, 00:10:16.175 "seek_hole": false, 00:10:16.175 "seek_data": false, 00:10:16.175 "copy": true, 00:10:16.175 "nvme_iov_md": false 00:10:16.175 }, 00:10:16.175 "memory_domains": [ 00:10:16.175 { 00:10:16.175 "dma_device_id": "system", 00:10:16.175 "dma_device_type": 1 00:10:16.175 }, 00:10:16.175 { 00:10:16.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.175 "dma_device_type": 2 00:10:16.175 } 00:10:16.175 ], 00:10:16.175 "driver_specific": {} 00:10:16.175 } 00:10:16.175 ] 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.175 [2024-12-13 08:21:28.522585] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.175 [2024-12-13 08:21:28.522637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.175 [2024-12-13 08:21:28.522661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.175 [2024-12-13 08:21:28.524663] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.175 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.434 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.434 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:16.434 "name": "Existed_Raid", 00:10:16.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.434 "strip_size_kb": 0, 00:10:16.434 "state": "configuring", 00:10:16.434 "raid_level": "raid1", 00:10:16.434 "superblock": false, 00:10:16.434 "num_base_bdevs": 3, 00:10:16.434 "num_base_bdevs_discovered": 2, 00:10:16.434 "num_base_bdevs_operational": 3, 00:10:16.434 "base_bdevs_list": [ 00:10:16.434 { 00:10:16.434 "name": "BaseBdev1", 00:10:16.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.434 "is_configured": false, 00:10:16.434 "data_offset": 0, 00:10:16.434 "data_size": 0 00:10:16.434 }, 00:10:16.434 { 00:10:16.434 "name": "BaseBdev2", 00:10:16.434 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:16.434 "is_configured": true, 00:10:16.434 "data_offset": 0, 00:10:16.434 "data_size": 65536 00:10:16.434 }, 00:10:16.434 { 00:10:16.434 "name": "BaseBdev3", 00:10:16.434 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:16.434 "is_configured": true, 00:10:16.434 "data_offset": 0, 00:10:16.434 "data_size": 65536 00:10:16.434 } 00:10:16.434 ] 00:10:16.434 }' 00:10:16.434 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.434 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.693 08:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:16.693 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.693 08:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.693 [2024-12-13 08:21:29.001772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.693 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.952 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.952 "name": "Existed_Raid", 00:10:16.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.952 "strip_size_kb": 0, 00:10:16.952 "state": "configuring", 00:10:16.952 "raid_level": "raid1", 00:10:16.952 "superblock": false, 00:10:16.952 "num_base_bdevs": 3, 
00:10:16.952 "num_base_bdevs_discovered": 1, 00:10:16.952 "num_base_bdevs_operational": 3, 00:10:16.952 "base_bdevs_list": [ 00:10:16.952 { 00:10:16.952 "name": "BaseBdev1", 00:10:16.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.952 "is_configured": false, 00:10:16.952 "data_offset": 0, 00:10:16.952 "data_size": 0 00:10:16.952 }, 00:10:16.952 { 00:10:16.952 "name": null, 00:10:16.952 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:16.952 "is_configured": false, 00:10:16.952 "data_offset": 0, 00:10:16.952 "data_size": 65536 00:10:16.952 }, 00:10:16.952 { 00:10:16.952 "name": "BaseBdev3", 00:10:16.952 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:16.952 "is_configured": true, 00:10:16.952 "data_offset": 0, 00:10:16.952 "data_size": 65536 00:10:16.952 } 00:10:16.952 ] 00:10:16.952 }' 00:10:16.952 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.952 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.211 08:21:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.211 [2024-12-13 08:21:29.549283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.211 BaseBdev1 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.211 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.211 [ 00:10:17.211 { 00:10:17.211 "name": "BaseBdev1", 00:10:17.211 "aliases": [ 00:10:17.211 "ed6ba59e-2320-4321-a8d2-800e3d6ddc38" 00:10:17.211 ], 00:10:17.471 "product_name": "Malloc disk", 
00:10:17.471 "block_size": 512, 00:10:17.471 "num_blocks": 65536, 00:10:17.471 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:17.471 "assigned_rate_limits": { 00:10:17.471 "rw_ios_per_sec": 0, 00:10:17.471 "rw_mbytes_per_sec": 0, 00:10:17.471 "r_mbytes_per_sec": 0, 00:10:17.471 "w_mbytes_per_sec": 0 00:10:17.471 }, 00:10:17.471 "claimed": true, 00:10:17.471 "claim_type": "exclusive_write", 00:10:17.471 "zoned": false, 00:10:17.471 "supported_io_types": { 00:10:17.471 "read": true, 00:10:17.471 "write": true, 00:10:17.471 "unmap": true, 00:10:17.471 "flush": true, 00:10:17.471 "reset": true, 00:10:17.471 "nvme_admin": false, 00:10:17.471 "nvme_io": false, 00:10:17.471 "nvme_io_md": false, 00:10:17.471 "write_zeroes": true, 00:10:17.471 "zcopy": true, 00:10:17.471 "get_zone_info": false, 00:10:17.471 "zone_management": false, 00:10:17.471 "zone_append": false, 00:10:17.471 "compare": false, 00:10:17.471 "compare_and_write": false, 00:10:17.471 "abort": true, 00:10:17.471 "seek_hole": false, 00:10:17.471 "seek_data": false, 00:10:17.471 "copy": true, 00:10:17.471 "nvme_iov_md": false 00:10:17.471 }, 00:10:17.471 "memory_domains": [ 00:10:17.471 { 00:10:17.471 "dma_device_id": "system", 00:10:17.471 "dma_device_type": 1 00:10:17.471 }, 00:10:17.471 { 00:10:17.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.471 "dma_device_type": 2 00:10:17.471 } 00:10:17.471 ], 00:10:17.471 "driver_specific": {} 00:10:17.471 } 00:10:17.471 ] 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.471 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.471 "name": "Existed_Raid", 00:10:17.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.471 "strip_size_kb": 0, 00:10:17.471 "state": "configuring", 00:10:17.471 "raid_level": "raid1", 00:10:17.471 "superblock": false, 00:10:17.471 "num_base_bdevs": 3, 00:10:17.471 "num_base_bdevs_discovered": 2, 00:10:17.471 "num_base_bdevs_operational": 3, 00:10:17.471 "base_bdevs_list": [ 00:10:17.471 { 00:10:17.471 "name": "BaseBdev1", 00:10:17.471 "uuid": 
"ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:17.471 "is_configured": true, 00:10:17.471 "data_offset": 0, 00:10:17.471 "data_size": 65536 00:10:17.471 }, 00:10:17.471 { 00:10:17.471 "name": null, 00:10:17.472 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:17.472 "is_configured": false, 00:10:17.472 "data_offset": 0, 00:10:17.472 "data_size": 65536 00:10:17.472 }, 00:10:17.472 { 00:10:17.472 "name": "BaseBdev3", 00:10:17.472 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:17.472 "is_configured": true, 00:10:17.472 "data_offset": 0, 00:10:17.472 "data_size": 65536 00:10:17.472 } 00:10:17.472 ] 00:10:17.472 }' 00:10:17.472 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.472 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.731 08:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.731 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.731 08:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 [2024-12-13 08:21:30.024570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.731 08:21:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.731 "name": "Existed_Raid", 00:10:17.731 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:17.731 "strip_size_kb": 0, 00:10:17.731 "state": "configuring", 00:10:17.731 "raid_level": "raid1", 00:10:17.731 "superblock": false, 00:10:17.731 "num_base_bdevs": 3, 00:10:17.731 "num_base_bdevs_discovered": 1, 00:10:17.731 "num_base_bdevs_operational": 3, 00:10:17.731 "base_bdevs_list": [ 00:10:17.731 { 00:10:17.731 "name": "BaseBdev1", 00:10:17.731 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:17.731 "is_configured": true, 00:10:17.731 "data_offset": 0, 00:10:17.731 "data_size": 65536 00:10:17.731 }, 00:10:17.731 { 00:10:17.731 "name": null, 00:10:17.731 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:17.731 "is_configured": false, 00:10:17.731 "data_offset": 0, 00:10:17.731 "data_size": 65536 00:10:17.731 }, 00:10:17.731 { 00:10:17.731 "name": null, 00:10:17.731 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:17.731 "is_configured": false, 00:10:17.731 "data_offset": 0, 00:10:17.731 "data_size": 65536 00:10:17.731 } 00:10:17.731 ] 00:10:17.731 }' 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.731 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.298 [2024-12-13 08:21:30.527743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.298 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.299 "name": "Existed_Raid", 00:10:18.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.299 "strip_size_kb": 0, 00:10:18.299 "state": "configuring", 00:10:18.299 "raid_level": "raid1", 00:10:18.299 "superblock": false, 00:10:18.299 "num_base_bdevs": 3, 00:10:18.299 "num_base_bdevs_discovered": 2, 00:10:18.299 "num_base_bdevs_operational": 3, 00:10:18.299 "base_bdevs_list": [ 00:10:18.299 { 00:10:18.299 "name": "BaseBdev1", 00:10:18.299 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:18.299 "is_configured": true, 00:10:18.299 "data_offset": 0, 00:10:18.299 "data_size": 65536 00:10:18.299 }, 00:10:18.299 { 00:10:18.299 "name": null, 00:10:18.299 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:18.299 "is_configured": false, 00:10:18.299 "data_offset": 0, 00:10:18.299 "data_size": 65536 00:10:18.299 }, 00:10:18.299 { 00:10:18.299 "name": "BaseBdev3", 00:10:18.299 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:18.299 "is_configured": true, 00:10:18.299 "data_offset": 0, 00:10:18.299 "data_size": 65536 00:10:18.299 } 00:10:18.299 ] 00:10:18.299 }' 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.299 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.866 08:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 [2024-12-13 08:21:30.978980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.866 08:21:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.866 "name": "Existed_Raid", 00:10:18.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.866 "strip_size_kb": 0, 00:10:18.866 "state": "configuring", 00:10:18.866 "raid_level": "raid1", 00:10:18.866 "superblock": false, 00:10:18.866 "num_base_bdevs": 3, 00:10:18.866 "num_base_bdevs_discovered": 1, 00:10:18.866 "num_base_bdevs_operational": 3, 00:10:18.866 "base_bdevs_list": [ 00:10:18.866 { 00:10:18.866 "name": null, 00:10:18.866 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:18.866 "is_configured": false, 00:10:18.866 "data_offset": 0, 00:10:18.866 "data_size": 65536 00:10:18.866 }, 00:10:18.866 { 00:10:18.866 "name": null, 00:10:18.866 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:18.866 "is_configured": false, 00:10:18.866 "data_offset": 0, 00:10:18.866 "data_size": 65536 00:10:18.866 }, 00:10:18.866 { 00:10:18.866 "name": "BaseBdev3", 00:10:18.866 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:18.866 "is_configured": true, 00:10:18.866 "data_offset": 0, 00:10:18.866 "data_size": 65536 00:10:18.866 } 00:10:18.866 ] 00:10:18.866 }' 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.866 08:21:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:19.126 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.126 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.126 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.126 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 [2024-12-13 08:21:31.527049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.384 "name": "Existed_Raid", 00:10:19.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.384 "strip_size_kb": 0, 00:10:19.384 "state": "configuring", 00:10:19.384 "raid_level": "raid1", 00:10:19.384 "superblock": false, 00:10:19.384 "num_base_bdevs": 3, 00:10:19.384 "num_base_bdevs_discovered": 2, 00:10:19.384 "num_base_bdevs_operational": 3, 00:10:19.384 "base_bdevs_list": [ 00:10:19.384 { 00:10:19.384 "name": null, 00:10:19.384 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:19.384 "is_configured": false, 00:10:19.385 "data_offset": 0, 00:10:19.385 "data_size": 65536 00:10:19.385 }, 00:10:19.385 { 00:10:19.385 "name": "BaseBdev2", 00:10:19.385 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:19.385 "is_configured": true, 00:10:19.385 "data_offset": 0, 00:10:19.385 "data_size": 65536 00:10:19.385 }, 00:10:19.385 { 
00:10:19.385 "name": "BaseBdev3", 00:10:19.385 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:19.385 "is_configured": true, 00:10:19.385 "data_offset": 0, 00:10:19.385 "data_size": 65536 00:10:19.385 } 00:10:19.385 ] 00:10:19.385 }' 00:10:19.385 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.385 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.643 08:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed6ba59e-2320-4321-a8d2-800e3d6ddc38 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 08:21:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 [2024-12-13 08:21:32.074239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.903 [2024-12-13 08:21:32.074300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.903 [2024-12-13 08:21:32.074308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:19.903 [2024-12-13 08:21:32.074576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.903 [2024-12-13 08:21:32.074726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.903 [2024-12-13 08:21:32.074760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:19.903 [2024-12-13 08:21:32.075029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.903 NewBaseBdev 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 [ 00:10:19.903 { 00:10:19.903 "name": "NewBaseBdev", 00:10:19.903 "aliases": [ 00:10:19.903 "ed6ba59e-2320-4321-a8d2-800e3d6ddc38" 00:10:19.903 ], 00:10:19.903 "product_name": "Malloc disk", 00:10:19.903 "block_size": 512, 00:10:19.903 "num_blocks": 65536, 00:10:19.903 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:19.903 "assigned_rate_limits": { 00:10:19.903 "rw_ios_per_sec": 0, 00:10:19.903 "rw_mbytes_per_sec": 0, 00:10:19.903 "r_mbytes_per_sec": 0, 00:10:19.903 "w_mbytes_per_sec": 0 00:10:19.903 }, 00:10:19.903 "claimed": true, 00:10:19.903 "claim_type": "exclusive_write", 00:10:19.903 "zoned": false, 00:10:19.903 "supported_io_types": { 00:10:19.903 "read": true, 00:10:19.903 "write": true, 00:10:19.903 "unmap": true, 00:10:19.903 "flush": true, 00:10:19.903 "reset": true, 00:10:19.903 "nvme_admin": false, 00:10:19.903 "nvme_io": false, 00:10:19.903 "nvme_io_md": false, 00:10:19.903 "write_zeroes": true, 00:10:19.903 "zcopy": true, 00:10:19.903 "get_zone_info": false, 00:10:19.903 "zone_management": false, 00:10:19.903 "zone_append": false, 00:10:19.903 "compare": false, 00:10:19.903 "compare_and_write": false, 00:10:19.903 "abort": true, 00:10:19.903 "seek_hole": false, 00:10:19.903 "seek_data": false, 00:10:19.903 "copy": true, 00:10:19.903 "nvme_iov_md": false 00:10:19.903 }, 00:10:19.903 "memory_domains": [ 00:10:19.903 { 00:10:19.903 
"dma_device_id": "system", 00:10:19.903 "dma_device_type": 1 00:10:19.903 }, 00:10:19.903 { 00:10:19.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.903 "dma_device_type": 2 00:10:19.903 } 00:10:19.903 ], 00:10:19.903 "driver_specific": {} 00:10:19.903 } 00:10:19.903 ] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.903 "name": "Existed_Raid", 00:10:19.903 "uuid": "b79910b1-a6e5-47b8-ac65-2f7a63104feb", 00:10:19.903 "strip_size_kb": 0, 00:10:19.903 "state": "online", 00:10:19.903 "raid_level": "raid1", 00:10:19.903 "superblock": false, 00:10:19.903 "num_base_bdevs": 3, 00:10:19.903 "num_base_bdevs_discovered": 3, 00:10:19.903 "num_base_bdevs_operational": 3, 00:10:19.903 "base_bdevs_list": [ 00:10:19.903 { 00:10:19.903 "name": "NewBaseBdev", 00:10:19.903 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:19.903 "is_configured": true, 00:10:19.903 "data_offset": 0, 00:10:19.903 "data_size": 65536 00:10:19.903 }, 00:10:19.903 { 00:10:19.903 "name": "BaseBdev2", 00:10:19.903 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:19.903 "is_configured": true, 00:10:19.903 "data_offset": 0, 00:10:19.903 "data_size": 65536 00:10:19.903 }, 00:10:19.903 { 00:10:19.903 "name": "BaseBdev3", 00:10:19.903 "uuid": "cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:19.903 "is_configured": true, 00:10:19.903 "data_offset": 0, 00:10:19.903 "data_size": 65536 00:10:19.903 } 00:10:19.903 ] 00:10:19.903 }' 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.903 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.470 08:21:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.470 [2024-12-13 08:21:32.585741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.470 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.470 "name": "Existed_Raid", 00:10:20.470 "aliases": [ 00:10:20.470 "b79910b1-a6e5-47b8-ac65-2f7a63104feb" 00:10:20.470 ], 00:10:20.470 "product_name": "Raid Volume", 00:10:20.470 "block_size": 512, 00:10:20.470 "num_blocks": 65536, 00:10:20.470 "uuid": "b79910b1-a6e5-47b8-ac65-2f7a63104feb", 00:10:20.470 "assigned_rate_limits": { 00:10:20.470 "rw_ios_per_sec": 0, 00:10:20.470 "rw_mbytes_per_sec": 0, 00:10:20.470 "r_mbytes_per_sec": 0, 00:10:20.470 "w_mbytes_per_sec": 0 00:10:20.470 }, 00:10:20.470 "claimed": false, 00:10:20.470 "zoned": false, 00:10:20.470 "supported_io_types": { 00:10:20.470 "read": true, 00:10:20.470 "write": true, 00:10:20.470 "unmap": false, 00:10:20.470 "flush": false, 00:10:20.470 "reset": true, 00:10:20.470 "nvme_admin": false, 00:10:20.470 "nvme_io": false, 00:10:20.470 "nvme_io_md": false, 00:10:20.470 "write_zeroes": true, 00:10:20.470 "zcopy": false, 00:10:20.470 
"get_zone_info": false, 00:10:20.470 "zone_management": false, 00:10:20.470 "zone_append": false, 00:10:20.470 "compare": false, 00:10:20.470 "compare_and_write": false, 00:10:20.470 "abort": false, 00:10:20.470 "seek_hole": false, 00:10:20.470 "seek_data": false, 00:10:20.470 "copy": false, 00:10:20.470 "nvme_iov_md": false 00:10:20.470 }, 00:10:20.470 "memory_domains": [ 00:10:20.470 { 00:10:20.470 "dma_device_id": "system", 00:10:20.470 "dma_device_type": 1 00:10:20.470 }, 00:10:20.470 { 00:10:20.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.470 "dma_device_type": 2 00:10:20.470 }, 00:10:20.470 { 00:10:20.470 "dma_device_id": "system", 00:10:20.471 "dma_device_type": 1 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.471 "dma_device_type": 2 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "dma_device_id": "system", 00:10:20.471 "dma_device_type": 1 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.471 "dma_device_type": 2 00:10:20.471 } 00:10:20.471 ], 00:10:20.471 "driver_specific": { 00:10:20.471 "raid": { 00:10:20.471 "uuid": "b79910b1-a6e5-47b8-ac65-2f7a63104feb", 00:10:20.471 "strip_size_kb": 0, 00:10:20.471 "state": "online", 00:10:20.471 "raid_level": "raid1", 00:10:20.471 "superblock": false, 00:10:20.471 "num_base_bdevs": 3, 00:10:20.471 "num_base_bdevs_discovered": 3, 00:10:20.471 "num_base_bdevs_operational": 3, 00:10:20.471 "base_bdevs_list": [ 00:10:20.471 { 00:10:20.471 "name": "NewBaseBdev", 00:10:20.471 "uuid": "ed6ba59e-2320-4321-a8d2-800e3d6ddc38", 00:10:20.471 "is_configured": true, 00:10:20.471 "data_offset": 0, 00:10:20.471 "data_size": 65536 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "name": "BaseBdev2", 00:10:20.471 "uuid": "d038b9b4-84a8-4254-acfa-a94cd42a8dba", 00:10:20.471 "is_configured": true, 00:10:20.471 "data_offset": 0, 00:10:20.471 "data_size": 65536 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "name": "BaseBdev3", 00:10:20.471 "uuid": 
"cecb2e42-c518-4869-8067-e85881a5f5f5", 00:10:20.471 "is_configured": true, 00:10:20.471 "data_offset": 0, 00:10:20.471 "data_size": 65536 00:10:20.471 } 00:10:20.471 ] 00:10:20.471 } 00:10:20.471 } 00:10:20.471 }' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:20.471 BaseBdev2 00:10:20.471 BaseBdev3' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.471 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.731 [2024-12-13 08:21:32.837019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.731 [2024-12-13 08:21:32.837055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.731 [2024-12-13 08:21:32.837170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.731 [2024-12-13 08:21:32.837470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.731 [2024-12-13 08:21:32.837484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67554 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67554 ']' 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67554 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67554 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.731 killing process with pid 67554 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67554' 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67554 00:10:20.731 
[2024-12-13 08:21:32.881471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.731 08:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67554 00:10:20.990 [2024-12-13 08:21:33.193980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:22.379 00:10:22.379 real 0m10.662s 00:10:22.379 user 0m16.934s 00:10:22.379 sys 0m1.876s 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.379 ************************************ 00:10:22.379 END TEST raid_state_function_test 00:10:22.379 ************************************ 00:10:22.379 08:21:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:22.379 08:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:22.379 08:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.379 08:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.379 ************************************ 00:10:22.379 START TEST raid_state_function_test_sb 00:10:22.379 ************************************ 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:22.379 08:21:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:22.379 
08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68181 00:10:22.379 Process raid pid: 68181 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68181' 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68181 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68181 ']' 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.379 08:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.379 [2024-12-13 08:21:34.486980] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:10:22.379 [2024-12-13 08:21:34.487131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.379 [2024-12-13 08:21:34.665622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.639 [2024-12-13 08:21:34.783117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.639 [2024-12-13 08:21:34.988541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.639 [2024-12-13 08:21:34.988590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.207 [2024-12-13 08:21:35.362221] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.207 [2024-12-13 08:21:35.362275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.207 [2024-12-13 08:21:35.362290] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.207 [2024-12-13 08:21:35.362301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.207 [2024-12-13 08:21:35.362307] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3
00:10:23.207 [2024-12-13 08:21:35.362316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.207 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.207 "name": "Existed_Raid",
00:10:23.207 "uuid": "d84f2ffa-7433-4356-b611-3345be69db9c",
00:10:23.207 "strip_size_kb": 0,
00:10:23.207 "state": "configuring",
00:10:23.207 "raid_level": "raid1",
00:10:23.207 "superblock": true,
00:10:23.207 "num_base_bdevs": 3,
00:10:23.207 "num_base_bdevs_discovered": 0,
00:10:23.207 "num_base_bdevs_operational": 3,
00:10:23.207 "base_bdevs_list": [
00:10:23.207 {
00:10:23.207 "name": "BaseBdev1",
00:10:23.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.207 "is_configured": false,
00:10:23.207 "data_offset": 0,
00:10:23.207 "data_size": 0
00:10:23.207 },
00:10:23.207 {
00:10:23.207 "name": "BaseBdev2",
00:10:23.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.207 "is_configured": false,
00:10:23.208 "data_offset": 0,
00:10:23.208 "data_size": 0
00:10:23.208 },
00:10:23.208 {
00:10:23.208 "name": "BaseBdev3",
00:10:23.208 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.208 "is_configured": false,
00:10:23.208 "data_offset": 0,
00:10:23.208 "data_size": 0
00:10:23.208 }
00:10:23.208 ]
00:10:23.208 }'
00:10:23.208 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.208 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.466 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:23.466 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.466 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.467 [2024-12-13 08:21:35.813372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:23.467 [2024-12-13 08:21:35.813412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.467 [2024-12-13 08:21:35.825358] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:23.467 [2024-12-13 08:21:35.825421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:23.467 [2024-12-13 08:21:35.825430] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:23.467 [2024-12-13 08:21:35.825441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:23.467 [2024-12-13 08:21:35.825447] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:23.467 [2024-12-13 08:21:35.825456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.467 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.726 [2024-12-13 08:21:35.873186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:23.726 BaseBdev1
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.726 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.726 [
00:10:23.726 {
00:10:23.726 "name": "BaseBdev1",
00:10:23.726 "aliases": [
00:10:23.726 "aa20834e-36d7-409b-b16a-1b256c05fa3a"
00:10:23.726 ],
00:10:23.726 "product_name": "Malloc disk",
00:10:23.726 "block_size": 512,
00:10:23.726 "num_blocks": 65536,
00:10:23.726 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:23.726 "assigned_rate_limits": {
00:10:23.726 "rw_ios_per_sec": 0,
00:10:23.726 "rw_mbytes_per_sec": 0,
00:10:23.726 "r_mbytes_per_sec": 0,
00:10:23.726 "w_mbytes_per_sec": 0
00:10:23.726 },
00:10:23.726 "claimed": true,
00:10:23.726 "claim_type": "exclusive_write",
00:10:23.726 "zoned": false,
00:10:23.726 "supported_io_types": {
00:10:23.726 "read": true,
00:10:23.726 "write": true,
00:10:23.726 "unmap": true,
00:10:23.726 "flush": true,
00:10:23.726 "reset": true,
00:10:23.726 "nvme_admin": false,
00:10:23.726 "nvme_io": false,
00:10:23.726 "nvme_io_md": false,
00:10:23.727 "write_zeroes": true,
00:10:23.727 "zcopy": true,
00:10:23.727 "get_zone_info": false,
00:10:23.727 "zone_management": false,
00:10:23.727 "zone_append": false,
00:10:23.727 "compare": false,
00:10:23.727 "compare_and_write": false,
00:10:23.727 "abort": true,
00:10:23.727 "seek_hole": false,
00:10:23.727 "seek_data": false,
00:10:23.727 "copy": true,
00:10:23.727 "nvme_iov_md": false
00:10:23.727 },
00:10:23.727 "memory_domains": [
00:10:23.727 {
00:10:23.727 "dma_device_id": "system",
00:10:23.727 "dma_device_type": 1
00:10:23.727 },
00:10:23.727 {
00:10:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:23.727 "dma_device_type": 2
00:10:23.727 }
00:10:23.727 ],
00:10:23.727 "driver_specific": {}
00:10:23.727 }
00:10:23.727 ]
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.727 "name": "Existed_Raid",
00:10:23.727 "uuid": "a48443d2-78b3-4ee9-b696-47ee7e9022d2",
00:10:23.727 "strip_size_kb": 0,
00:10:23.727 "state": "configuring",
00:10:23.727 "raid_level": "raid1",
00:10:23.727 "superblock": true,
00:10:23.727 "num_base_bdevs": 3,
00:10:23.727 "num_base_bdevs_discovered": 1,
00:10:23.727 "num_base_bdevs_operational": 3,
00:10:23.727 "base_bdevs_list": [
00:10:23.727 {
00:10:23.727 "name": "BaseBdev1",
00:10:23.727 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:23.727 "is_configured": true,
00:10:23.727 "data_offset": 2048,
00:10:23.727 "data_size": 63488
00:10:23.727 },
00:10:23.727 {
00:10:23.727 "name": "BaseBdev2",
00:10:23.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.727 "is_configured": false,
00:10:23.727 "data_offset": 0,
00:10:23.727 "data_size": 0
00:10:23.727 },
00:10:23.727 {
00:10:23.727 "name": "BaseBdev3",
00:10:23.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.727 "is_configured": false,
00:10:23.727 "data_offset": 0,
00:10:23.727 "data_size": 0
00:10:23.727 }
00:10:23.727 ]
00:10:23.727 }'
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.727 08:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.987 [2024-12-13 08:21:36.292504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:23.987 [2024-12-13 08:21:36.292578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.987 [2024-12-13 08:21:36.304528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:23.987 [2024-12-13 08:21:36.306501] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:23.987 [2024-12-13 08:21:36.306544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:23.987 [2024-12-13 08:21:36.306554] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:23.987 [2024-12-13 08:21:36.306563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.987 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.246 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.246 "name": "Existed_Raid",
00:10:24.246 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7",
00:10:24.246 "strip_size_kb": 0,
00:10:24.246 "state": "configuring",
00:10:24.246 "raid_level": "raid1",
00:10:24.246 "superblock": true,
00:10:24.246 "num_base_bdevs": 3,
00:10:24.246 "num_base_bdevs_discovered": 1,
00:10:24.246 "num_base_bdevs_operational": 3,
00:10:24.246 "base_bdevs_list": [
00:10:24.246 {
00:10:24.246 "name": "BaseBdev1",
00:10:24.246 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:24.246 "is_configured": true,
00:10:24.246 "data_offset": 2048,
00:10:24.246 "data_size": 63488
00:10:24.246 },
00:10:24.246 {
00:10:24.246 "name": "BaseBdev2",
00:10:24.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.246 "is_configured": false,
00:10:24.246 "data_offset": 0,
00:10:24.246 "data_size": 0
00:10:24.246 },
00:10:24.246 {
00:10:24.246 "name": "BaseBdev3",
00:10:24.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.246 "is_configured": false,
00:10:24.246 "data_offset": 0,
00:10:24.246 "data_size": 0
00:10:24.246 }
00:10:24.247 ]
00:10:24.247 }'
00:10:24.247 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.247 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.507 [2024-12-13 08:21:36.772425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:24.507 BaseBdev2
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.507 [
00:10:24.507 {
00:10:24.507 "name": "BaseBdev2",
00:10:24.507 "aliases": [
00:10:24.507 "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4"
00:10:24.507 ],
00:10:24.507 "product_name": "Malloc disk",
00:10:24.507 "block_size": 512,
00:10:24.507 "num_blocks": 65536,
00:10:24.507 "uuid": "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4",
00:10:24.507 "assigned_rate_limits": {
00:10:24.507 "rw_ios_per_sec": 0,
00:10:24.507 "rw_mbytes_per_sec": 0,
00:10:24.507 "r_mbytes_per_sec": 0,
00:10:24.507 "w_mbytes_per_sec": 0
00:10:24.507 },
00:10:24.507 "claimed": true,
00:10:24.507 "claim_type": "exclusive_write",
00:10:24.507 "zoned": false,
00:10:24.507 "supported_io_types": {
00:10:24.507 "read": true,
00:10:24.507 "write": true,
00:10:24.507 "unmap": true,
00:10:24.507 "flush": true,
00:10:24.507 "reset": true,
00:10:24.507 "nvme_admin": false,
00:10:24.507 "nvme_io": false,
00:10:24.507 "nvme_io_md": false,
00:10:24.507 "write_zeroes": true,
00:10:24.507 "zcopy": true,
00:10:24.507 "get_zone_info": false,
00:10:24.507 "zone_management": false,
00:10:24.507 "zone_append": false,
00:10:24.507 "compare": false,
00:10:24.507 "compare_and_write": false,
00:10:24.507 "abort": true,
00:10:24.507 "seek_hole": false,
00:10:24.507 "seek_data": false,
00:10:24.507 "copy": true,
00:10:24.507 "nvme_iov_md": false
00:10:24.507 },
00:10:24.507 "memory_domains": [
00:10:24.507 {
00:10:24.507 "dma_device_id": "system",
00:10:24.507 "dma_device_type": 1
00:10:24.507 },
00:10:24.507 {
00:10:24.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:24.507 "dma_device_type": 2
00:10:24.507 }
00:10:24.507 ],
00:10:24.507 "driver_specific": {}
00:10:24.507 }
00:10:24.507 ]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.507 "name": "Existed_Raid",
00:10:24.507 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7",
00:10:24.507 "strip_size_kb": 0,
00:10:24.507 "state": "configuring",
00:10:24.507 "raid_level": "raid1",
00:10:24.507 "superblock": true,
00:10:24.507 "num_base_bdevs": 3,
00:10:24.507 "num_base_bdevs_discovered": 2,
00:10:24.507 "num_base_bdevs_operational": 3,
00:10:24.507 "base_bdevs_list": [
00:10:24.507 {
00:10:24.507 "name": "BaseBdev1",
00:10:24.507 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:24.507 "is_configured": true,
00:10:24.507 "data_offset": 2048,
00:10:24.507 "data_size": 63488
00:10:24.507 },
00:10:24.507 {
00:10:24.507 "name": "BaseBdev2",
00:10:24.507 "uuid": "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4",
00:10:24.507 "is_configured": true,
00:10:24.507 "data_offset": 2048,
00:10:24.507 "data_size": 63488
00:10:24.507 },
00:10:24.507 {
00:10:24.507 "name": "BaseBdev3",
00:10:24.507 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.507 "is_configured": false,
00:10:24.507 "data_offset": 0,
00:10:24.507 "data_size": 0
00:10:24.507 }
00:10:24.507 ]
00:10:24.507 }'
00:10:24.507 08:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.508 08:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.077 [2024-12-13 08:21:37.260437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:25.077 [2024-12-13 08:21:37.260753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:25.077 [2024-12-13 08:21:37.260781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:25.077 [2024-12-13 08:21:37.261091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:25.077 [2024-12-13 08:21:37.261302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:25.077 [2024-12-13 08:21:37.261318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:25.077 BaseBdev3
00:10:25.077 [2024-12-13 08:21:37.261499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.077 [
00:10:25.077 {
00:10:25.077 "name": "BaseBdev3",
00:10:25.077 "aliases": [
00:10:25.077 "4978c066-027d-4a50-ae24-f21a57f86483"
00:10:25.077 ],
00:10:25.077 "product_name": "Malloc disk",
00:10:25.077 "block_size": 512,
00:10:25.077 "num_blocks": 65536,
00:10:25.077 "uuid": "4978c066-027d-4a50-ae24-f21a57f86483",
00:10:25.077 "assigned_rate_limits": {
00:10:25.077 "rw_ios_per_sec": 0,
00:10:25.077 "rw_mbytes_per_sec": 0,
00:10:25.077 "r_mbytes_per_sec": 0,
00:10:25.077 "w_mbytes_per_sec": 0
00:10:25.077 },
00:10:25.077 "claimed": true,
00:10:25.077 "claim_type": "exclusive_write",
00:10:25.077 "zoned": false,
00:10:25.077 "supported_io_types": {
00:10:25.077 "read": true,
00:10:25.077 "write": true,
00:10:25.077 "unmap": true,
00:10:25.077 "flush": true,
00:10:25.077 "reset": true,
00:10:25.077 "nvme_admin": false,
00:10:25.077 "nvme_io": false,
00:10:25.077 "nvme_io_md": false,
00:10:25.077 "write_zeroes": true,
00:10:25.077 "zcopy": true,
00:10:25.077 "get_zone_info": false,
00:10:25.077 "zone_management": false,
00:10:25.077 "zone_append": false,
00:10:25.077 "compare": false,
00:10:25.077 "compare_and_write": false,
00:10:25.077 "abort": true,
00:10:25.077 "seek_hole": false,
00:10:25.077 "seek_data": false,
00:10:25.077 "copy": true,
00:10:25.077 "nvme_iov_md": false
00:10:25.077 },
00:10:25.077 "memory_domains": [
00:10:25.077 {
00:10:25.077 "dma_device_id": "system",
00:10:25.077 "dma_device_type": 1
00:10:25.077 },
00:10:25.077 {
00:10:25.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.077 "dma_device_type": 2
00:10:25.077 }
00:10:25.077 ],
00:10:25.077 "driver_specific": {}
00:10:25.077 }
00:10:25.077 ]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:25.077 "name": "Existed_Raid",
00:10:25.077 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7",
00:10:25.077 "strip_size_kb": 0,
00:10:25.077 "state": "online",
00:10:25.077 "raid_level": "raid1",
00:10:25.077 "superblock": true,
00:10:25.077 "num_base_bdevs": 3,
00:10:25.077 "num_base_bdevs_discovered": 3,
00:10:25.077 "num_base_bdevs_operational": 3,
00:10:25.077 "base_bdevs_list": [
00:10:25.077 {
00:10:25.077 "name": "BaseBdev1",
00:10:25.077 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:25.077 "is_configured": true,
00:10:25.077 "data_offset": 2048,
00:10:25.077 "data_size": 63488
00:10:25.077 },
00:10:25.077 {
00:10:25.077 "name": "BaseBdev2",
00:10:25.077 "uuid": "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4",
00:10:25.077 "is_configured": true,
00:10:25.077 "data_offset": 2048,
00:10:25.077 "data_size": 63488
00:10:25.077 },
00:10:25.077 {
00:10:25.077 "name": "BaseBdev3",
00:10:25.077 "uuid": "4978c066-027d-4a50-ae24-f21a57f86483",
00:10:25.077 "is_configured": true,
00:10:25.077 "data_offset": 2048,
00:10:25.077 "data_size": 63488
00:10:25.077 }
00:10:25.077 ]
00:10:25.077 }'
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:25.077 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.646 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.646 [2024-12-13 08:21:37.771938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:25.647 "name": "Existed_Raid",
00:10:25.647 "aliases": [
00:10:25.647 "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7"
00:10:25.647 ],
00:10:25.647 "product_name": "Raid Volume",
00:10:25.647 "block_size": 512,
00:10:25.647 "num_blocks": 63488,
00:10:25.647 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7",
00:10:25.647 "assigned_rate_limits": {
00:10:25.647 "rw_ios_per_sec": 0,
00:10:25.647 "rw_mbytes_per_sec": 0,
00:10:25.647 "r_mbytes_per_sec": 0,
00:10:25.647 "w_mbytes_per_sec": 0
00:10:25.647 },
00:10:25.647 "claimed": false,
00:10:25.647 "zoned": false,
00:10:25.647 "supported_io_types": {
00:10:25.647 "read": true,
00:10:25.647 "write": true,
00:10:25.647 "unmap": false,
00:10:25.647 "flush": false,
00:10:25.647 "reset": true,
00:10:25.647 "nvme_admin": false,
00:10:25.647 "nvme_io": false,
00:10:25.647 "nvme_io_md": false,
00:10:25.647 "write_zeroes": true,
00:10:25.647 "zcopy": false,
00:10:25.647 "get_zone_info": false,
00:10:25.647 "zone_management": false,
00:10:25.647 "zone_append": false,
00:10:25.647 "compare": false,
00:10:25.647 "compare_and_write": false,
00:10:25.647 "abort": false,
00:10:25.647 "seek_hole": false,
00:10:25.647 "seek_data": false,
00:10:25.647 "copy": false,
00:10:25.647 "nvme_iov_md": false
00:10:25.647 },
00:10:25.647 "memory_domains": [
00:10:25.647 {
00:10:25.647 "dma_device_id": "system",
00:10:25.647 "dma_device_type": 1
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.647 "dma_device_type": 2
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "dma_device_id": "system",
00:10:25.647 "dma_device_type": 1
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.647 "dma_device_type": 2
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "dma_device_id": "system",
00:10:25.647 "dma_device_type": 1
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.647 "dma_device_type": 2
00:10:25.647 }
00:10:25.647 ],
00:10:25.647 "driver_specific": {
00:10:25.647 "raid": {
00:10:25.647 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7",
00:10:25.647 "strip_size_kb": 0,
00:10:25.647 "state": "online",
00:10:25.647 "raid_level": "raid1",
00:10:25.647 "superblock": true,
00:10:25.647 "num_base_bdevs": 3,
00:10:25.647 "num_base_bdevs_discovered": 3,
00:10:25.647 "num_base_bdevs_operational": 3,
00:10:25.647 "base_bdevs_list": [
00:10:25.647 {
00:10:25.647 "name": "BaseBdev1",
00:10:25.647 "uuid": "aa20834e-36d7-409b-b16a-1b256c05fa3a",
00:10:25.647 "is_configured": true,
00:10:25.647 "data_offset": 2048,
00:10:25.647 "data_size": 63488
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "name": "BaseBdev2",
00:10:25.647 "uuid": "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4",
00:10:25.647 "is_configured": true,
00:10:25.647 "data_offset": 2048,
00:10:25.647 "data_size": 63488
00:10:25.647 },
00:10:25.647 {
00:10:25.647 "name": "BaseBdev3",
00:10:25.647 "uuid": "4978c066-027d-4a50-ae24-f21a57f86483",
00:10:25.647 "is_configured": true,
00:10:25.647 "data_offset": 2048,
00:10:25.647 "data_size": 63488
00:10:25.647 }
00:10:25.647 ]
00:10:25.647 }
00:10:25.647 }
00:10:25.647 }'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:25.647 BaseBdev2
00:10:25.647 BaseBdev3'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.647 08:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.647 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.907 [2024-12-13 08:21:38.079188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.907 
08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.907 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.908 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.908 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.908 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.908 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.908 "name": "Existed_Raid", 00:10:25.908 "uuid": "b2fa6d1d-ab2f-4f29-8e2c-e6c349d281c7", 00:10:25.908 "strip_size_kb": 0, 00:10:25.908 "state": "online", 00:10:25.908 "raid_level": "raid1", 00:10:25.908 "superblock": true, 00:10:25.908 "num_base_bdevs": 3, 00:10:25.908 "num_base_bdevs_discovered": 2, 00:10:25.908 "num_base_bdevs_operational": 2, 00:10:25.908 "base_bdevs_list": [ 00:10:25.908 { 00:10:25.908 "name": null, 00:10:25.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.908 "is_configured": false, 00:10:25.908 "data_offset": 0, 00:10:25.908 "data_size": 63488 00:10:25.908 }, 00:10:25.908 { 00:10:25.908 "name": "BaseBdev2", 00:10:25.908 "uuid": "ce4d2c6e-e587-4107-b46d-aa5a5a98ace4", 00:10:25.908 "is_configured": true, 00:10:25.908 "data_offset": 2048, 00:10:25.908 "data_size": 63488 00:10:25.908 }, 00:10:25.908 { 00:10:25.908 "name": "BaseBdev3", 00:10:25.908 "uuid": "4978c066-027d-4a50-ae24-f21a57f86483", 00:10:25.908 "is_configured": true, 00:10:25.908 "data_offset": 2048, 00:10:25.908 "data_size": 63488 00:10:25.908 } 00:10:25.908 ] 00:10:25.908 }' 00:10:25.908 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.908 
08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:26.477 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.478 [2024-12-13 08:21:38.700714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.478 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.757 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.757 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.757 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:26.757 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.757 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 [2024-12-13 08:21:38.856112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.757 [2024-12-13 08:21:38.856267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.757 [2024-12-13 08:21:38.956949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.758 [2024-12-13 08:21:38.957013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.758 [2024-12-13 08:21:38.957043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 BaseBdev2 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 [ 00:10:26.758 { 00:10:26.758 "name": "BaseBdev2", 00:10:26.758 "aliases": [ 00:10:26.758 "2491e9c3-35db-4018-81cf-b5eddb65cb6d" 00:10:26.758 ], 00:10:26.758 "product_name": "Malloc disk", 00:10:26.758 "block_size": 512, 00:10:26.758 "num_blocks": 65536, 00:10:26.758 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d", 00:10:26.758 "assigned_rate_limits": { 00:10:26.758 "rw_ios_per_sec": 0, 00:10:26.758 "rw_mbytes_per_sec": 0, 00:10:26.758 "r_mbytes_per_sec": 0, 00:10:26.758 "w_mbytes_per_sec": 0 00:10:26.758 }, 00:10:26.758 "claimed": false, 00:10:26.758 "zoned": false, 00:10:26.758 "supported_io_types": { 00:10:26.758 "read": true, 00:10:26.758 "write": true, 00:10:26.758 "unmap": true, 00:10:26.758 "flush": true, 00:10:26.758 "reset": true, 00:10:26.758 "nvme_admin": false, 00:10:26.758 "nvme_io": false, 00:10:26.758 
"nvme_io_md": false, 00:10:26.758 "write_zeroes": true, 00:10:26.758 "zcopy": true, 00:10:26.758 "get_zone_info": false, 00:10:26.758 "zone_management": false, 00:10:26.758 "zone_append": false, 00:10:26.758 "compare": false, 00:10:26.758 "compare_and_write": false, 00:10:26.758 "abort": true, 00:10:26.758 "seek_hole": false, 00:10:26.758 "seek_data": false, 00:10:26.758 "copy": true, 00:10:26.758 "nvme_iov_md": false 00:10:26.758 }, 00:10:26.758 "memory_domains": [ 00:10:26.758 { 00:10:26.758 "dma_device_id": "system", 00:10:26.758 "dma_device_type": 1 00:10:26.758 }, 00:10:26.758 { 00:10:26.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.758 "dma_device_type": 2 00:10:26.758 } 00:10:26.758 ], 00:10:26.758 "driver_specific": {} 00:10:26.758 } 00:10:26.758 ] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 BaseBdev3 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.758 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.017 [ 00:10:27.017 { 00:10:27.017 "name": "BaseBdev3", 00:10:27.017 "aliases": [ 00:10:27.017 "8fc2a325-9feb-455d-a687-065176bef687" 00:10:27.017 ], 00:10:27.017 "product_name": "Malloc disk", 00:10:27.017 "block_size": 512, 00:10:27.017 "num_blocks": 65536, 00:10:27.017 "uuid": "8fc2a325-9feb-455d-a687-065176bef687", 00:10:27.017 "assigned_rate_limits": { 00:10:27.017 "rw_ios_per_sec": 0, 00:10:27.017 "rw_mbytes_per_sec": 0, 00:10:27.017 "r_mbytes_per_sec": 0, 00:10:27.017 "w_mbytes_per_sec": 0 00:10:27.017 }, 00:10:27.017 "claimed": false, 00:10:27.017 "zoned": false, 00:10:27.017 "supported_io_types": { 00:10:27.017 "read": true, 00:10:27.017 "write": true, 00:10:27.017 "unmap": true, 00:10:27.017 "flush": true, 00:10:27.017 "reset": true, 00:10:27.017 "nvme_admin": false, 
00:10:27.017 "nvme_io": false, 00:10:27.017 "nvme_io_md": false, 00:10:27.017 "write_zeroes": true, 00:10:27.017 "zcopy": true, 00:10:27.017 "get_zone_info": false, 00:10:27.017 "zone_management": false, 00:10:27.017 "zone_append": false, 00:10:27.017 "compare": false, 00:10:27.017 "compare_and_write": false, 00:10:27.017 "abort": true, 00:10:27.017 "seek_hole": false, 00:10:27.017 "seek_data": false, 00:10:27.017 "copy": true, 00:10:27.017 "nvme_iov_md": false 00:10:27.017 }, 00:10:27.017 "memory_domains": [ 00:10:27.017 { 00:10:27.017 "dma_device_id": "system", 00:10:27.017 "dma_device_type": 1 00:10:27.017 }, 00:10:27.017 { 00:10:27.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.017 "dma_device_type": 2 00:10:27.017 } 00:10:27.017 ], 00:10:27.017 "driver_specific": {} 00:10:27.017 } 00:10:27.017 ] 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.017 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.018 [2024-12-13 08:21:39.152999] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.018 [2024-12-13 08:21:39.153053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.018 [2024-12-13 08:21:39.153078] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.018 [2024-12-13 08:21:39.155020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.018 
08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.018 "name": "Existed_Raid", 00:10:27.018 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018", 00:10:27.018 "strip_size_kb": 0, 00:10:27.018 "state": "configuring", 00:10:27.018 "raid_level": "raid1", 00:10:27.018 "superblock": true, 00:10:27.018 "num_base_bdevs": 3, 00:10:27.018 "num_base_bdevs_discovered": 2, 00:10:27.018 "num_base_bdevs_operational": 3, 00:10:27.018 "base_bdevs_list": [ 00:10:27.018 { 00:10:27.018 "name": "BaseBdev1", 00:10:27.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.018 "is_configured": false, 00:10:27.018 "data_offset": 0, 00:10:27.018 "data_size": 0 00:10:27.018 }, 00:10:27.018 { 00:10:27.018 "name": "BaseBdev2", 00:10:27.018 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d", 00:10:27.018 "is_configured": true, 00:10:27.018 "data_offset": 2048, 00:10:27.018 "data_size": 63488 00:10:27.018 }, 00:10:27.018 { 00:10:27.018 "name": "BaseBdev3", 00:10:27.018 "uuid": "8fc2a325-9feb-455d-a687-065176bef687", 00:10:27.018 "is_configured": true, 00:10:27.018 "data_offset": 2048, 00:10:27.018 "data_size": 63488 00:10:27.018 } 00:10:27.018 ] 00:10:27.018 }' 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.018 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.276 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:27.276 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.276 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.276 [2024-12-13 08:21:39.580286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.276 08:21:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.276 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.276 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.277 "name": 
"Existed_Raid", 00:10:27.277 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018", 00:10:27.277 "strip_size_kb": 0, 00:10:27.277 "state": "configuring", 00:10:27.277 "raid_level": "raid1", 00:10:27.277 "superblock": true, 00:10:27.277 "num_base_bdevs": 3, 00:10:27.277 "num_base_bdevs_discovered": 1, 00:10:27.277 "num_base_bdevs_operational": 3, 00:10:27.277 "base_bdevs_list": [ 00:10:27.277 { 00:10:27.277 "name": "BaseBdev1", 00:10:27.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.277 "is_configured": false, 00:10:27.277 "data_offset": 0, 00:10:27.277 "data_size": 0 00:10:27.277 }, 00:10:27.277 { 00:10:27.277 "name": null, 00:10:27.277 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d", 00:10:27.277 "is_configured": false, 00:10:27.277 "data_offset": 0, 00:10:27.277 "data_size": 63488 00:10:27.277 }, 00:10:27.277 { 00:10:27.277 "name": "BaseBdev3", 00:10:27.277 "uuid": "8fc2a325-9feb-455d-a687-065176bef687", 00:10:27.277 "is_configured": true, 00:10:27.277 "data_offset": 2048, 00:10:27.277 "data_size": 63488 00:10:27.277 } 00:10:27.277 ] 00:10:27.277 }' 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.277 08:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:27.846 
08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:27.846 [2024-12-13 08:21:40.118241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:27.846 BaseBdev1
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:27.846 [
00:10:27.846 {
00:10:27.846 "name": "BaseBdev1",
00:10:27.846 "aliases": [
00:10:27.846 "06d099df-5cfe-4a22-bae4-f7824d1d2fba"
00:10:27.846 ],
00:10:27.846 "product_name": "Malloc disk",
00:10:27.846 "block_size": 512,
00:10:27.846 "num_blocks": 65536,
00:10:27.846 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:27.846 "assigned_rate_limits": {
00:10:27.846 "rw_ios_per_sec": 0,
00:10:27.846 "rw_mbytes_per_sec": 0,
00:10:27.846 "r_mbytes_per_sec": 0,
00:10:27.846 "w_mbytes_per_sec": 0
00:10:27.846 },
00:10:27.846 "claimed": true,
00:10:27.846 "claim_type": "exclusive_write",
00:10:27.846 "zoned": false,
00:10:27.846 "supported_io_types": {
00:10:27.846 "read": true,
00:10:27.846 "write": true,
00:10:27.846 "unmap": true,
00:10:27.846 "flush": true,
00:10:27.846 "reset": true,
00:10:27.846 "nvme_admin": false,
00:10:27.846 "nvme_io": false,
00:10:27.846 "nvme_io_md": false,
00:10:27.846 "write_zeroes": true,
00:10:27.846 "zcopy": true,
00:10:27.846 "get_zone_info": false,
00:10:27.846 "zone_management": false,
00:10:27.846 "zone_append": false,
00:10:27.846 "compare": false,
00:10:27.846 "compare_and_write": false,
00:10:27.846 "abort": true,
00:10:27.846 "seek_hole": false,
00:10:27.846 "seek_data": false,
00:10:27.846 "copy": true,
00:10:27.846 "nvme_iov_md": false
00:10:27.846 },
00:10:27.846 "memory_domains": [
00:10:27.846 {
00:10:27.846 "dma_device_id": "system",
00:10:27.846 "dma_device_type": 1
00:10:27.846 },
00:10:27.846 {
00:10:27.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:27.846 "dma_device_type": 2
00:10:27.846 }
00:10:27.846 ],
00:10:27.846 "driver_specific": {}
00:10:27.846 }
00:10:27.846 ]
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:27.846 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.105 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.105 "name": "Existed_Raid",
00:10:28.105 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:28.105 "strip_size_kb": 0,
00:10:28.105 "state": "configuring",
00:10:28.105 "raid_level": "raid1",
00:10:28.105 "superblock": true,
00:10:28.105 "num_base_bdevs": 3,
00:10:28.105 "num_base_bdevs_discovered": 2,
00:10:28.105 "num_base_bdevs_operational": 3,
00:10:28.105 "base_bdevs_list": [
00:10:28.105 {
00:10:28.105 "name": "BaseBdev1",
00:10:28.105 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:28.105 "is_configured": true,
00:10:28.105 "data_offset": 2048,
00:10:28.105 "data_size": 63488
00:10:28.105 },
00:10:28.105 {
00:10:28.105 "name": null,
00:10:28.105 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:28.105 "is_configured": false,
00:10:28.105 "data_offset": 0,
00:10:28.105 "data_size": 63488
00:10:28.105 },
00:10:28.105 {
00:10:28.105 "name": "BaseBdev3",
00:10:28.105 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:28.105 "is_configured": true,
00:10:28.105 "data_offset": 2048,
00:10:28.105 "data_size": 63488
00:10:28.105 }
00:10:28.105 ]
00:10:28.105 }'
00:10:28.105 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.105 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.364 [2024-12-13 08:21:40.633421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.364 "name": "Existed_Raid",
00:10:28.364 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:28.364 "strip_size_kb": 0,
00:10:28.364 "state": "configuring",
00:10:28.364 "raid_level": "raid1",
00:10:28.364 "superblock": true,
00:10:28.364 "num_base_bdevs": 3,
00:10:28.364 "num_base_bdevs_discovered": 1,
00:10:28.364 "num_base_bdevs_operational": 3,
00:10:28.364 "base_bdevs_list": [
00:10:28.364 {
00:10:28.364 "name": "BaseBdev1",
00:10:28.364 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:28.364 "is_configured": true,
00:10:28.364 "data_offset": 2048,
00:10:28.364 "data_size": 63488
00:10:28.364 },
00:10:28.364 {
00:10:28.364 "name": null,
00:10:28.364 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:28.364 "is_configured": false,
00:10:28.364 "data_offset": 0,
00:10:28.364 "data_size": 63488
00:10:28.364 },
00:10:28.364 {
00:10:28.364 "name": null,
00:10:28.364 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:28.364 "is_configured": false,
00:10:28.364 "data_offset": 0,
00:10:28.364 "data_size": 63488
00:10:28.364 }
00:10:28.364 ]
00:10:28.364 }'
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.364 08:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.930 [2024-12-13 08:21:41.088669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.930 "name": "Existed_Raid",
00:10:28.930 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:28.930 "strip_size_kb": 0,
00:10:28.930 "state": "configuring",
00:10:28.930 "raid_level": "raid1",
00:10:28.930 "superblock": true,
00:10:28.930 "num_base_bdevs": 3,
00:10:28.930 "num_base_bdevs_discovered": 2,
00:10:28.930 "num_base_bdevs_operational": 3,
00:10:28.930 "base_bdevs_list": [
00:10:28.930 {
00:10:28.930 "name": "BaseBdev1",
00:10:28.930 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:28.930 "is_configured": true,
00:10:28.930 "data_offset": 2048,
00:10:28.930 "data_size": 63488
00:10:28.930 },
00:10:28.930 {
00:10:28.930 "name": null,
00:10:28.930 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:28.930 "is_configured": false,
00:10:28.930 "data_offset": 0,
00:10:28.930 "data_size": 63488
00:10:28.930 },
00:10:28.930 {
00:10:28.930 "name": "BaseBdev3",
00:10:28.930 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:28.930 "is_configured": true,
00:10:28.930 "data_offset": 2048,
00:10:28.930 "data_size": 63488
00:10:28.930 }
00:10:28.930 ]
00:10:28.930 }'
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.930 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.188 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:29.188 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.188 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.188 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.188 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.447 [2024-12-13 08:21:41.575902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:29.447 "name": "Existed_Raid",
00:10:29.447 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:29.447 "strip_size_kb": 0,
00:10:29.447 "state": "configuring",
00:10:29.447 "raid_level": "raid1",
00:10:29.447 "superblock": true,
00:10:29.447 "num_base_bdevs": 3,
00:10:29.447 "num_base_bdevs_discovered": 1,
00:10:29.447 "num_base_bdevs_operational": 3,
00:10:29.447 "base_bdevs_list": [
00:10:29.447 {
00:10:29.447 "name": null,
00:10:29.447 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:29.447 "is_configured": false,
00:10:29.447 "data_offset": 0,
00:10:29.447 "data_size": 63488
00:10:29.447 },
00:10:29.447 {
00:10:29.447 "name": null,
00:10:29.447 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:29.447 "is_configured": false,
00:10:29.447 "data_offset": 0,
00:10:29.447 "data_size": 63488
00:10:29.447 },
00:10:29.447 {
00:10:29.447 "name": "BaseBdev3",
00:10:29.447 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:29.447 "is_configured": true,
00:10:29.447 "data_offset": 2048,
00:10:29.447 "data_size": 63488
00:10:29.447 }
00:10:29.447 ]
00:10:29.447 }'
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:29.447 08:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.015 [2024-12-13 08:21:42.229358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.015 "name": "Existed_Raid",
00:10:30.015 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:30.015 "strip_size_kb": 0,
00:10:30.015 "state": "configuring",
"raid_level": "raid1",
00:10:30.015 "superblock": true,
00:10:30.015 "num_base_bdevs": 3,
00:10:30.015 "num_base_bdevs_discovered": 2,
00:10:30.015 "num_base_bdevs_operational": 3,
00:10:30.015 "base_bdevs_list": [
00:10:30.015 {
00:10:30.015 "name": null,
00:10:30.015 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:30.015 "is_configured": false,
00:10:30.015 "data_offset": 0,
00:10:30.015 "data_size": 63488
00:10:30.015 },
00:10:30.015 {
00:10:30.015 "name": "BaseBdev2",
00:10:30.015 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:30.015 "is_configured": true,
00:10:30.015 "data_offset": 2048,
00:10:30.015 "data_size": 63488
00:10:30.015 },
00:10:30.015 {
00:10:30.015 "name": "BaseBdev3",
00:10:30.015 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:30.015 "is_configured": true,
00:10:30.015 "data_offset": 2048,
00:10:30.015 "data_size": 63488
00:10:30.015 }
00:10:30.015 ]
00:10:30.015 }'
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.015 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 06d099df-5cfe-4a22-bae4-f7824d1d2fba
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 [2024-12-13 08:21:42.785866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:30.581 [2024-12-13 08:21:42.786197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:30.581 [2024-12-13 08:21:42.786252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:30.581 [2024-12-13 08:21:42.786565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:30.581 [2024-12-13 08:21:42.786772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:30.581 NewBaseBdev
00:10:30.581 [2024-12-13 08:21:42.786818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:10:30.581 [2024-12-13 08:21:42.786988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.581 [
00:10:30.581 {
00:10:30.581 "name": "NewBaseBdev",
00:10:30.581 "aliases": [
00:10:30.581 "06d099df-5cfe-4a22-bae4-f7824d1d2fba"
00:10:30.581 ],
00:10:30.581 "product_name": "Malloc disk",
00:10:30.581 "block_size": 512,
00:10:30.581 "num_blocks": 65536,
00:10:30.581 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:30.581 "assigned_rate_limits": {
00:10:30.581 "rw_ios_per_sec": 0,
00:10:30.581 "rw_mbytes_per_sec": 0,
00:10:30.581 "r_mbytes_per_sec": 0,
00:10:30.581 "w_mbytes_per_sec": 0
00:10:30.581 },
00:10:30.581 "claimed": true,
00:10:30.581 "claim_type": "exclusive_write",
"zoned": false,
00:10:30.581 "supported_io_types": {
00:10:30.581 "read": true,
00:10:30.581 "write": true,
00:10:30.581 "unmap": true,
00:10:30.581 "flush": true,
00:10:30.581 "reset": true,
00:10:30.581 "nvme_admin": false,
00:10:30.581 "nvme_io": false,
00:10:30.581 "nvme_io_md": false,
00:10:30.581 "write_zeroes": true,
00:10:30.581 "zcopy": true,
00:10:30.581 "get_zone_info": false,
00:10:30.581 "zone_management": false,
00:10:30.581 "zone_append": false,
00:10:30.581 "compare": false,
00:10:30.581 "compare_and_write": false,
00:10:30.581 "abort": true,
00:10:30.581 "seek_hole": false,
00:10:30.581 "seek_data": false,
00:10:30.581 "copy": true,
00:10:30.581 "nvme_iov_md": false
00:10:30.581 },
00:10:30.581 "memory_domains": [
00:10:30.581 {
00:10:30.581 "dma_device_id": "system",
00:10:30.581 "dma_device_type": 1
00:10:30.581 },
00:10:30.581 {
00:10:30.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:30.581 "dma_device_type": 2
00:10:30.581 }
00:10:30.581 ],
00:10:30.581 "driver_specific": {}
00:10:30.581 }
00:10:30.581 ]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.581 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.582 "name": "Existed_Raid",
00:10:30.582 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:30.582 "strip_size_kb": 0,
00:10:30.582 "state": "online",
00:10:30.582 "raid_level": "raid1",
00:10:30.582 "superblock": true,
00:10:30.582 "num_base_bdevs": 3,
00:10:30.582 "num_base_bdevs_discovered": 3,
00:10:30.582 "num_base_bdevs_operational": 3,
00:10:30.582 "base_bdevs_list": [
00:10:30.582 {
00:10:30.582 "name": "NewBaseBdev",
00:10:30.582 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:30.582 "is_configured": true,
00:10:30.582 "data_offset": 2048,
00:10:30.582 "data_size": 63488
00:10:30.582 },
00:10:30.582 {
00:10:30.582 "name": "BaseBdev2",
00:10:30.582 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:30.582 "is_configured": true,
00:10:30.582 "data_offset": 2048,
00:10:30.582 "data_size": 63488
00:10:30.582 {
00:10:30.582 "name": "BaseBdev3",
00:10:30.582 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:30.582 "is_configured": true,
00:10:30.582 "data_offset": 2048,
00:10:30.582 "data_size": 63488
00:10:30.582 }
00:10:30.582 ]
00:10:30.582 }'
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.582 08:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:31.161 [2024-12-13 08:21:43.297402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.161 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:31.161 "name": "Existed_Raid",
00:10:31.161 "aliases": [
00:10:31.161 "eb3419cd-db53-48de-9b73-77a0e21af018"
00:10:31.161 ],
00:10:31.161 "product_name": "Raid Volume",
00:10:31.161 "block_size": 512,
00:10:31.161 "num_blocks": 63488,
00:10:31.161 "uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:31.161 "assigned_rate_limits": {
00:10:31.161 "rw_ios_per_sec": 0,
00:10:31.161 "rw_mbytes_per_sec": 0,
00:10:31.161 "r_mbytes_per_sec": 0,
00:10:31.161 "w_mbytes_per_sec": 0
00:10:31.161 },
00:10:31.161 "claimed": false,
00:10:31.161 "zoned": false,
00:10:31.161 "supported_io_types": {
00:10:31.161 "read": true,
00:10:31.161 "write": true,
00:10:31.161 "unmap": false,
00:10:31.161 "flush": false,
00:10:31.161 "reset": true,
00:10:31.161 "nvme_admin": false,
00:10:31.161 "nvme_io": false,
00:10:31.161 "nvme_io_md": false,
00:10:31.161 "write_zeroes": true,
00:10:31.161 "zcopy": false,
00:10:31.161 "get_zone_info": false,
00:10:31.161 "zone_management": false,
00:10:31.161 "zone_append": false,
00:10:31.161 "compare": false,
00:10:31.161 "compare_and_write": false,
00:10:31.161 "abort": false,
00:10:31.161 "seek_hole": false,
00:10:31.161 "seek_data": false,
00:10:31.161 "copy": false,
00:10:31.161 "nvme_iov_md": false
00:10:31.161 },
00:10:31.161 "memory_domains": [
00:10:31.161 {
00:10:31.161 "dma_device_id": "system",
00:10:31.161 "dma_device_type": 1
00:10:31.161 },
00:10:31.161 {
00:10:31.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:31.161 "dma_device_type": 2
00:10:31.161 },
00:10:31.161 {
00:10:31.161 "dma_device_id": "system",
00:10:31.161 "dma_device_type": 1
00:10:31.161 },
00:10:31.162 {
00:10:31.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:31.162 "dma_device_type": 2
00:10:31.162 },
00:10:31.162 {
00:10:31.162 "dma_device_id": "system",
00:10:31.162 "dma_device_type": 1
00:10:31.162 },
00:10:31.162 {
00:10:31.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:31.162 "dma_device_type": 2
00:10:31.162 }
00:10:31.162 ],
00:10:31.162 "driver_specific": {
00:10:31.162 "raid": {
"uuid": "eb3419cd-db53-48de-9b73-77a0e21af018",
00:10:31.162 "strip_size_kb": 0,
00:10:31.162 "state": "online",
00:10:31.162 "raid_level": "raid1",
00:10:31.162 "superblock": true,
00:10:31.162 "num_base_bdevs": 3,
00:10:31.162 "num_base_bdevs_discovered": 3,
00:10:31.162 "num_base_bdevs_operational": 3,
00:10:31.162 "base_bdevs_list": [
00:10:31.162 {
00:10:31.162 "name": "NewBaseBdev",
00:10:31.162 "uuid": "06d099df-5cfe-4a22-bae4-f7824d1d2fba",
00:10:31.162 "is_configured": true,
00:10:31.162 "data_offset": 2048,
00:10:31.162 "data_size": 63488
00:10:31.162 },
00:10:31.162 {
00:10:31.162 "name": "BaseBdev2",
00:10:31.162 "uuid": "2491e9c3-35db-4018-81cf-b5eddb65cb6d",
00:10:31.162 "is_configured": true,
00:10:31.162 "data_offset": 2048,
00:10:31.162 "data_size": 63488
00:10:31.162 },
00:10:31.162 {
00:10:31.162 "name": "BaseBdev3",
00:10:31.162 "uuid": "8fc2a325-9feb-455d-a687-065176bef687",
00:10:31.162 "is_configured": true,
00:10:31.162 "data_offset": 2048,
00:10:31.162 "data_size": 63488
00:10:31.162 }
00:10:31.162 ]
00:10:31.162 }
00:10:31.162 }
00:10:31.162 }'
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:31.162 BaseBdev2
00:10:31.162 BaseBdev3'
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:31.162 08:21:43
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.162 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.421 08:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.421 [2024-12-13 08:21:43.592543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.421 [2024-12-13 08:21:43.592626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.421 [2024-12-13 08:21:43.592736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.421 [2024-12-13 08:21:43.593079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.421 [2024-12-13 08:21:43.593166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68181 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68181 ']' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68181 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68181 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.421 killing process with pid 68181 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68181' 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68181 00:10:31.421 [2024-12-13 08:21:43.644698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.421 08:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68181 00:10:31.679 [2024-12-13 08:21:43.974989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.054 08:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:33.054 00:10:33.054 real 0m10.767s 00:10:33.054 user 0m17.064s 00:10:33.054 sys 0m1.924s 00:10:33.054 08:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.054 ************************************ 00:10:33.054 END TEST raid_state_function_test_sb 00:10:33.054 ************************************ 00:10:33.054 08:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.054 08:21:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
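The `killprocess` steps above (`autotest_common.sh@958`) use `kill -0 68181` to test whether the pid is still alive before killing and waiting on it: signal 0 delivers nothing but reports whether the process exists and is signalable. A sketch of the same probe in Python (the helper name is illustrative, not part of the test suite):

```python
import os

def process_alive(pid: int) -> bool:
    """Equivalent of the shell's `kill -0 $pid` liveness probe."""
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is delivered
    except ProcessLookupError:
        return False     # no such pid (ESRCH)
    except PermissionError:
        return True      # pid exists but belongs to another user (EPERM)
    return True

print(process_alive(os.getpid()))  # True for the current process
```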
raid_superblock_test raid1 3 00:10:33.054 08:21:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.054 08:21:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.054 08:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.054 ************************************ 00:10:33.054 START TEST raid_superblock_test 00:10:33.054 ************************************ 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68801 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68801 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68801 ']' 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.054 08:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.054 [2024-12-13 08:21:45.333394] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:10:33.054 [2024-12-13 08:21:45.333623] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68801 ] 00:10:33.313 [2024-12-13 08:21:45.510199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.313 [2024-12-13 08:21:45.638261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.571 [2024-12-13 08:21:45.854782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.571 [2024-12-13 08:21:45.854847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.831 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:33.832 
08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.832 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.090 malloc1 00:10:34.090 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.090 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:34.090 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.090 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.090 [2024-12-13 08:21:46.237166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.090 [2024-12-13 08:21:46.237283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.090 [2024-12-13 08:21:46.237329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:34.090 [2024-12-13 08:21:46.237389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.090 [2024-12-13 08:21:46.239861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.090 [2024-12-13 08:21:46.239945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.090 pt1 00:10:34.090 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.091 malloc2 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.091 [2024-12-13 08:21:46.297925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.091 [2024-12-13 08:21:46.298030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.091 [2024-12-13 08:21:46.298070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:34.091 [2024-12-13 08:21:46.298108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.091 [2024-12-13 08:21:46.300340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.091 [2024-12-13 08:21:46.300412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.091 
pt2 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.091 malloc3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.091 [2024-12-13 08:21:46.372715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:34.091 [2024-12-13 08:21:46.372812] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.091 [2024-12-13 08:21:46.372851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:34.091 [2024-12-13 08:21:46.372880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.091 [2024-12-13 08:21:46.375626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.091 [2024-12-13 08:21:46.375736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:34.091 pt3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.091 [2024-12-13 08:21:46.384795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.091 [2024-12-13 08:21:46.387174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.091 [2024-12-13 08:21:46.387328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:34.091 [2024-12-13 08:21:46.387635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:34.091 [2024-12-13 08:21:46.387721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.091 [2024-12-13 08:21:46.388121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:34.091 
[2024-12-13 08:21:46.388417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:34.091 [2024-12-13 08:21:46.388488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:34.091 [2024-12-13 08:21:46.388778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.091 "name": "raid_bdev1", 00:10:34.091 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:34.091 "strip_size_kb": 0, 00:10:34.091 "state": "online", 00:10:34.091 "raid_level": "raid1", 00:10:34.091 "superblock": true, 00:10:34.091 "num_base_bdevs": 3, 00:10:34.091 "num_base_bdevs_discovered": 3, 00:10:34.091 "num_base_bdevs_operational": 3, 00:10:34.091 "base_bdevs_list": [ 00:10:34.091 { 00:10:34.091 "name": "pt1", 00:10:34.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.091 "is_configured": true, 00:10:34.091 "data_offset": 2048, 00:10:34.091 "data_size": 63488 00:10:34.091 }, 00:10:34.091 { 00:10:34.091 "name": "pt2", 00:10:34.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.091 "is_configured": true, 00:10:34.091 "data_offset": 2048, 00:10:34.091 "data_size": 63488 00:10:34.091 }, 00:10:34.091 { 00:10:34.091 "name": "pt3", 00:10:34.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.091 "is_configured": true, 00:10:34.091 "data_offset": 2048, 00:10:34.091 "data_size": 63488 00:10:34.091 } 00:10:34.091 ] 00:10:34.091 }' 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.091 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.658 08:21:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.658 [2024-12-13 08:21:46.860395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.658 "name": "raid_bdev1", 00:10:34.658 "aliases": [ 00:10:34.658 "1d4b6ce2-a918-4705-8795-d73be9e39756" 00:10:34.658 ], 00:10:34.658 "product_name": "Raid Volume", 00:10:34.658 "block_size": 512, 00:10:34.658 "num_blocks": 63488, 00:10:34.658 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:34.658 "assigned_rate_limits": { 00:10:34.658 "rw_ios_per_sec": 0, 00:10:34.658 "rw_mbytes_per_sec": 0, 00:10:34.658 "r_mbytes_per_sec": 0, 00:10:34.658 "w_mbytes_per_sec": 0 00:10:34.658 }, 00:10:34.658 "claimed": false, 00:10:34.658 "zoned": false, 00:10:34.658 "supported_io_types": { 00:10:34.658 "read": true, 00:10:34.658 "write": true, 00:10:34.658 "unmap": false, 00:10:34.658 "flush": false, 00:10:34.658 "reset": true, 00:10:34.658 "nvme_admin": false, 00:10:34.658 "nvme_io": false, 00:10:34.658 "nvme_io_md": false, 00:10:34.658 "write_zeroes": true, 00:10:34.658 "zcopy": false, 00:10:34.658 "get_zone_info": false, 00:10:34.658 "zone_management": false, 00:10:34.658 "zone_append": false, 00:10:34.658 "compare": false, 00:10:34.658 
"compare_and_write": false, 00:10:34.658 "abort": false, 00:10:34.658 "seek_hole": false, 00:10:34.658 "seek_data": false, 00:10:34.658 "copy": false, 00:10:34.658 "nvme_iov_md": false 00:10:34.658 }, 00:10:34.658 "memory_domains": [ 00:10:34.658 { 00:10:34.658 "dma_device_id": "system", 00:10:34.658 "dma_device_type": 1 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.658 "dma_device_type": 2 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "dma_device_id": "system", 00:10:34.658 "dma_device_type": 1 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.658 "dma_device_type": 2 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "dma_device_id": "system", 00:10:34.658 "dma_device_type": 1 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.658 "dma_device_type": 2 00:10:34.658 } 00:10:34.658 ], 00:10:34.658 "driver_specific": { 00:10:34.658 "raid": { 00:10:34.658 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:34.658 "strip_size_kb": 0, 00:10:34.658 "state": "online", 00:10:34.658 "raid_level": "raid1", 00:10:34.658 "superblock": true, 00:10:34.658 "num_base_bdevs": 3, 00:10:34.658 "num_base_bdevs_discovered": 3, 00:10:34.658 "num_base_bdevs_operational": 3, 00:10:34.658 "base_bdevs_list": [ 00:10:34.658 { 00:10:34.658 "name": "pt1", 00:10:34.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.658 "is_configured": true, 00:10:34.658 "data_offset": 2048, 00:10:34.658 "data_size": 63488 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "name": "pt2", 00:10:34.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.658 "is_configured": true, 00:10:34.658 "data_offset": 2048, 00:10:34.658 "data_size": 63488 00:10:34.658 }, 00:10:34.658 { 00:10:34.658 "name": "pt3", 00:10:34.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.658 "is_configured": true, 00:10:34.658 "data_offset": 2048, 00:10:34.658 "data_size": 63488 00:10:34.658 } 
00:10:34.658 ] 00:10:34.658 } 00:10:34.658 } 00:10:34.658 }' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.658 pt2 00:10:34.658 pt3' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.658 08:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.658 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.917 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.917 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 
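The comparison loop above (`bdev_raid.sh@189`–`@193`) builds a string from `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the raid bdev and for each base bdev, then matches it against the glob `\5\1\2\ \ \ ` (i.e. `512` plus three spaces, so the three metadata fields must be null). A Python re-implementation of that jq expression, assuming jq's documented behavior of rendering null elements as empty strings in `join`:

```python
def jq_join(values, sep=" "):
    # Mimics jq's join(): null becomes "", numbers/booleans are stringified.
    return sep.join("" if v is None else str(v) for v in values)

# Field values as implied by the log: plain 512-byte blocks, no metadata.
bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}

cmp_base_bdev = jq_join([bdev["block_size"], bdev["md_size"],
                         bdev["md_interleave"], bdev["dif_type"]])
print(repr(cmp_base_bdev))  # '512   ' -- "512" followed by three spaces
```

Joining four fields with single-space separators yields three separators after `512`, which is exactly the pattern the `[[ ... == \5\1\2\ \ \ ]]` test accepts.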
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:34.918 [2024-12-13 08:21:47.139875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1d4b6ce2-a918-4705-8795-d73be9e39756 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1d4b6ce2-a918-4705-8795-d73be9e39756 ']' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 [2024-12-13 08:21:47.175492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.918 [2024-12-13 08:21:47.175576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.918 [2024-12-13 08:21:47.175712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.918 [2024-12-13 08:21:47.175832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.918 [2024-12-13 08:21:47.175888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:34.918 
08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.918 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.177 [2024-12-13 08:21:47.303351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:35.177 [2024-12-13 08:21:47.305457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:35.177 [2024-12-13 08:21:47.305564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:35.177 [2024-12-13 08:21:47.305676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:35.177 [2024-12-13 08:21:47.305777] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:35.177 [2024-12-13 08:21:47.305836] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:35.177 [2024-12-13 08:21:47.305908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.177 [2024-12-13 08:21:47.305941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:35.177 request: 00:10:35.177 { 00:10:35.177 "name": "raid_bdev1", 00:10:35.177 "raid_level": "raid1", 00:10:35.177 "base_bdevs": [ 00:10:35.177 "malloc1", 00:10:35.177 "malloc2", 00:10:35.177 "malloc3" 00:10:35.177 ], 00:10:35.177 "superblock": false, 00:10:35.177 "method": "bdev_raid_create", 00:10:35.177 "req_id": 1 00:10:35.177 } 00:10:35.177 Got JSON-RPC error response 00:10:35.177 response: 00:10:35.177 { 00:10:35.177 "code": -17, 00:10:35.177 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:35.177 } 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:35.177 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.178 08:21:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 [2024-12-13 08:21:47.363234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.178 [2024-12-13 08:21:47.363341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.178 [2024-12-13 08:21:47.363388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:35.178 [2024-12-13 08:21:47.363423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.178 [2024-12-13 08:21:47.365739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.178 [2024-12-13 08:21:47.365810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.178 [2024-12-13 08:21:47.365922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:35.178 [2024-12-13 08:21:47.366023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:35.178 pt1 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.178 "name": "raid_bdev1", 00:10:35.178 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:35.178 "strip_size_kb": 0, 00:10:35.178 "state": "configuring", 00:10:35.178 
"raid_level": "raid1", 00:10:35.178 "superblock": true, 00:10:35.178 "num_base_bdevs": 3, 00:10:35.178 "num_base_bdevs_discovered": 1, 00:10:35.178 "num_base_bdevs_operational": 3, 00:10:35.178 "base_bdevs_list": [ 00:10:35.178 { 00:10:35.178 "name": "pt1", 00:10:35.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.178 "is_configured": true, 00:10:35.178 "data_offset": 2048, 00:10:35.178 "data_size": 63488 00:10:35.178 }, 00:10:35.178 { 00:10:35.178 "name": null, 00:10:35.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.178 "is_configured": false, 00:10:35.178 "data_offset": 2048, 00:10:35.178 "data_size": 63488 00:10:35.178 }, 00:10:35.178 { 00:10:35.178 "name": null, 00:10:35.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.178 "is_configured": false, 00:10:35.178 "data_offset": 2048, 00:10:35.178 "data_size": 63488 00:10:35.178 } 00:10:35.178 ] 00:10:35.178 }' 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.178 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 [2024-12-13 08:21:47.826505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.745 [2024-12-13 08:21:47.826619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.745 [2024-12-13 08:21:47.826660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:35.745 [2024-12-13 08:21:47.826689] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.745 [2024-12-13 08:21:47.827221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.745 [2024-12-13 08:21:47.827287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.745 [2024-12-13 08:21:47.827424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.745 [2024-12-13 08:21:47.827455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.745 pt2 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 [2024-12-13 08:21:47.838488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.745 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.746 "name": "raid_bdev1", 00:10:35.746 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:35.746 "strip_size_kb": 0, 00:10:35.746 "state": "configuring", 00:10:35.746 "raid_level": "raid1", 00:10:35.746 "superblock": true, 00:10:35.746 "num_base_bdevs": 3, 00:10:35.746 "num_base_bdevs_discovered": 1, 00:10:35.746 "num_base_bdevs_operational": 3, 00:10:35.746 "base_bdevs_list": [ 00:10:35.746 { 00:10:35.746 "name": "pt1", 00:10:35.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.746 "is_configured": true, 00:10:35.746 "data_offset": 2048, 00:10:35.746 "data_size": 63488 00:10:35.746 }, 00:10:35.746 { 00:10:35.746 "name": null, 00:10:35.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.746 "is_configured": false, 00:10:35.746 "data_offset": 0, 00:10:35.746 "data_size": 63488 00:10:35.746 }, 00:10:35.746 { 00:10:35.746 "name": null, 00:10:35.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.746 "is_configured": false, 00:10:35.746 "data_offset": 2048, 00:10:35.746 
"data_size": 63488 00:10:35.746 } 00:10:35.746 ] 00:10:35.746 }' 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.746 08:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 [2024-12-13 08:21:48.305717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.005 [2024-12-13 08:21:48.305863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.005 [2024-12-13 08:21:48.305918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:36.005 [2024-12-13 08:21:48.305958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.005 [2024-12-13 08:21:48.306527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.005 [2024-12-13 08:21:48.306596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.005 [2024-12-13 08:21:48.306726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:36.005 [2024-12-13 08:21:48.306799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.005 pt2 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 [2024-12-13 08:21:48.317679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.005 [2024-12-13 08:21:48.317739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.005 [2024-12-13 08:21:48.317757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:36.005 [2024-12-13 08:21:48.317777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.005 [2024-12-13 08:21:48.318258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.005 [2024-12-13 08:21:48.318304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.005 [2024-12-13 08:21:48.318387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:36.005 [2024-12-13 08:21:48.318424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.005 [2024-12-13 08:21:48.318569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.005 [2024-12-13 08:21:48.318584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.005 [2024-12-13 08:21:48.318850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.005 [2024-12-13 08:21:48.319077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:36.005 [2024-12-13 08:21:48.319088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:36.005 [2024-12-13 08:21:48.319270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.005 pt3 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.005 "name": "raid_bdev1", 00:10:36.005 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:36.005 "strip_size_kb": 0, 00:10:36.005 "state": "online", 00:10:36.005 "raid_level": "raid1", 00:10:36.005 "superblock": true, 00:10:36.005 "num_base_bdevs": 3, 00:10:36.005 "num_base_bdevs_discovered": 3, 00:10:36.005 "num_base_bdevs_operational": 3, 00:10:36.005 "base_bdevs_list": [ 00:10:36.005 { 00:10:36.005 "name": "pt1", 00:10:36.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 2048, 00:10:36.005 "data_size": 63488 00:10:36.005 }, 00:10:36.005 { 00:10:36.005 "name": "pt2", 00:10:36.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 2048, 00:10:36.005 "data_size": 63488 00:10:36.006 }, 00:10:36.006 { 00:10:36.006 "name": "pt3", 00:10:36.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.006 "is_configured": true, 00:10:36.006 "data_offset": 2048, 00:10:36.006 "data_size": 63488 00:10:36.006 } 00:10:36.006 ] 00:10:36.006 }' 00:10:36.006 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.006 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.574 08:21:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.574 [2024-12-13 08:21:48.773346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.574 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.574 "name": "raid_bdev1", 00:10:36.574 "aliases": [ 00:10:36.574 "1d4b6ce2-a918-4705-8795-d73be9e39756" 00:10:36.575 ], 00:10:36.575 "product_name": "Raid Volume", 00:10:36.575 "block_size": 512, 00:10:36.575 "num_blocks": 63488, 00:10:36.575 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:36.575 "assigned_rate_limits": { 00:10:36.575 "rw_ios_per_sec": 0, 00:10:36.575 "rw_mbytes_per_sec": 0, 00:10:36.575 "r_mbytes_per_sec": 0, 00:10:36.575 "w_mbytes_per_sec": 0 00:10:36.575 }, 00:10:36.575 "claimed": false, 00:10:36.575 "zoned": false, 00:10:36.575 "supported_io_types": { 00:10:36.575 "read": true, 00:10:36.575 "write": true, 00:10:36.575 "unmap": false, 00:10:36.575 "flush": false, 00:10:36.575 "reset": true, 00:10:36.575 "nvme_admin": false, 00:10:36.575 "nvme_io": false, 00:10:36.575 "nvme_io_md": false, 00:10:36.575 "write_zeroes": true, 00:10:36.575 "zcopy": false, 00:10:36.575 "get_zone_info": false, 00:10:36.575 
"zone_management": false, 00:10:36.575 "zone_append": false, 00:10:36.575 "compare": false, 00:10:36.575 "compare_and_write": false, 00:10:36.575 "abort": false, 00:10:36.575 "seek_hole": false, 00:10:36.575 "seek_data": false, 00:10:36.575 "copy": false, 00:10:36.575 "nvme_iov_md": false 00:10:36.575 }, 00:10:36.575 "memory_domains": [ 00:10:36.575 { 00:10:36.575 "dma_device_id": "system", 00:10:36.575 "dma_device_type": 1 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.575 "dma_device_type": 2 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "dma_device_id": "system", 00:10:36.575 "dma_device_type": 1 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.575 "dma_device_type": 2 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "dma_device_id": "system", 00:10:36.575 "dma_device_type": 1 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.575 "dma_device_type": 2 00:10:36.575 } 00:10:36.575 ], 00:10:36.575 "driver_specific": { 00:10:36.575 "raid": { 00:10:36.575 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:36.575 "strip_size_kb": 0, 00:10:36.575 "state": "online", 00:10:36.575 "raid_level": "raid1", 00:10:36.575 "superblock": true, 00:10:36.575 "num_base_bdevs": 3, 00:10:36.575 "num_base_bdevs_discovered": 3, 00:10:36.575 "num_base_bdevs_operational": 3, 00:10:36.575 "base_bdevs_list": [ 00:10:36.575 { 00:10:36.575 "name": "pt1", 00:10:36.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.575 "is_configured": true, 00:10:36.575 "data_offset": 2048, 00:10:36.575 "data_size": 63488 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "name": "pt2", 00:10:36.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.575 "is_configured": true, 00:10:36.575 "data_offset": 2048, 00:10:36.575 "data_size": 63488 00:10:36.575 }, 00:10:36.575 { 00:10:36.575 "name": "pt3", 00:10:36.575 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:36.575 "is_configured": true, 00:10:36.575 "data_offset": 2048, 00:10:36.575 "data_size": 63488 00:10:36.575 } 00:10:36.575 ] 00:10:36.575 } 00:10:36.575 } 00:10:36.575 }' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.575 pt2 00:10:36.575 pt3' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.575 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.834 08:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:36.834 [2024-12-13 08:21:49.024817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1d4b6ce2-a918-4705-8795-d73be9e39756 '!=' 1d4b6ce2-a918-4705-8795-d73be9e39756 ']' 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.834 [2024-12-13 08:21:49.072501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.834 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.834 "name": "raid_bdev1", 00:10:36.834 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:36.834 "strip_size_kb": 0, 00:10:36.834 "state": "online", 00:10:36.834 "raid_level": "raid1", 00:10:36.834 "superblock": true, 00:10:36.834 "num_base_bdevs": 3, 00:10:36.835 "num_base_bdevs_discovered": 2, 00:10:36.835 "num_base_bdevs_operational": 2, 00:10:36.835 "base_bdevs_list": [ 00:10:36.835 { 00:10:36.835 "name": null, 00:10:36.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.835 "is_configured": false, 00:10:36.835 "data_offset": 0, 00:10:36.835 "data_size": 63488 00:10:36.835 }, 00:10:36.835 { 00:10:36.835 "name": "pt2", 00:10:36.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.835 "is_configured": true, 00:10:36.835 "data_offset": 2048, 00:10:36.835 "data_size": 63488 00:10:36.835 }, 00:10:36.835 { 00:10:36.835 "name": "pt3", 00:10:36.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.835 "is_configured": true, 00:10:36.835 "data_offset": 2048, 00:10:36.835 "data_size": 63488 00:10:36.835 } 00:10:36.835 ] 00:10:36.835 }' 00:10:36.835 08:21:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.835 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 [2024-12-13 08:21:49.519683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.405 [2024-12-13 08:21:49.519767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.405 [2024-12-13 08:21:49.519879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.405 [2024-12-13 08:21:49.519972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.405 [2024-12-13 08:21:49.520028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:37.405 
08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.405 [2024-12-13 08:21:49.603503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.405 [2024-12-13 08:21:49.603624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.405 [2024-12-13 08:21:49.603667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:37.405 [2024-12-13 08:21:49.603712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.405 [2024-12-13 08:21:49.606269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.405 [2024-12-13 08:21:49.606356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.405 [2024-12-13 08:21:49.606479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.405 [2024-12-13 08:21:49.606567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.405 pt2 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.405 "name": "raid_bdev1", 00:10:37.405 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:37.405 "strip_size_kb": 0, 00:10:37.405 "state": "configuring", 00:10:37.405 "raid_level": "raid1", 00:10:37.405 "superblock": true, 00:10:37.405 "num_base_bdevs": 3, 00:10:37.405 "num_base_bdevs_discovered": 1, 00:10:37.405 "num_base_bdevs_operational": 2, 00:10:37.405 "base_bdevs_list": [ 00:10:37.405 { 00:10:37.405 "name": null, 00:10:37.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.405 "is_configured": false, 00:10:37.405 "data_offset": 2048, 00:10:37.405 "data_size": 63488 00:10:37.405 }, 00:10:37.405 { 00:10:37.405 "name": "pt2", 00:10:37.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.405 "is_configured": true, 00:10:37.405 "data_offset": 2048, 00:10:37.405 "data_size": 63488 00:10:37.405 }, 00:10:37.405 { 00:10:37.405 "name": null, 00:10:37.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.405 "is_configured": false, 00:10:37.405 "data_offset": 2048, 00:10:37.405 "data_size": 63488 00:10:37.405 } 00:10:37.405 ] 00:10:37.405 }' 
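[editor's note: for readers tracing the harness rather than the log itself, the `verify_raid_bdev_state` helper exercised repeatedly above boils down to fetching the raid bdev's JSON (as dumped in the `raid_bdev_info='{...}'` assignment just before this point) and comparing a handful of fields against expected values. The sketch below re-expresses that check in Python against an abridged copy of the exact JSON shown in the log; the function name `verify_state` is made up for illustration and is not part of bdev_raid.sh, which does the same comparisons in shell with `jq`.]

```python
import json

# raid_bdev_info as dumped by `rpc_cmd bdev_raid_get_bdevs all` above,
# abridged to the fields the state check touches
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": null,  "is_configured": false, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": null,  "is_configured": false, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Roughly what verify_raid_bdev_state asserts (a sketch; see
    # test/bdev/bdev_raid.sh for the real shell helper)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered count must agree with the configured entries in base_bdevs_list
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured

# matches the `verify_raid_bdev_state raid_bdev1 configuring raid1 0 2` call in the log
verify_state(raid_bdev_info, "configuring", "raid1", 0, 2)
```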
00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.405 08:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.665 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.665 [2024-12-13 08:21:50.026862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:37.665 [2024-12-13 08:21:50.027055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.665 [2024-12-13 08:21:50.027097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:37.665 [2024-12-13 08:21:50.027146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.665 [2024-12-13 08:21:50.027701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.924 [2024-12-13 08:21:50.027780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:37.924 [2024-12-13 08:21:50.027894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:37.924 [2024-12-13 08:21:50.027926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:37.924 [2024-12-13 08:21:50.028050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.924 [2024-12-13 08:21:50.028063] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.924 [2024-12-13 08:21:50.028380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:37.924 [2024-12-13 08:21:50.028549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.924 [2024-12-13 08:21:50.028560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:37.924 [2024-12-13 08:21:50.028715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.924 pt3 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.924 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.925 "name": "raid_bdev1", 00:10:37.925 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:37.925 "strip_size_kb": 0, 00:10:37.925 "state": "online", 00:10:37.925 "raid_level": "raid1", 00:10:37.925 "superblock": true, 00:10:37.925 "num_base_bdevs": 3, 00:10:37.925 "num_base_bdevs_discovered": 2, 00:10:37.925 "num_base_bdevs_operational": 2, 00:10:37.925 "base_bdevs_list": [ 00:10:37.925 { 00:10:37.925 "name": null, 00:10:37.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.925 "is_configured": false, 00:10:37.925 "data_offset": 2048, 00:10:37.925 "data_size": 63488 00:10:37.925 }, 00:10:37.925 { 00:10:37.925 "name": "pt2", 00:10:37.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.925 "is_configured": true, 00:10:37.925 "data_offset": 2048, 00:10:37.925 "data_size": 63488 00:10:37.925 }, 00:10:37.925 { 00:10:37.925 "name": "pt3", 00:10:37.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.925 "is_configured": true, 00:10:37.925 "data_offset": 2048, 00:10:37.925 "data_size": 63488 00:10:37.925 } 00:10:37.925 ] 00:10:37.925 }' 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.925 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 
08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 [2024-12-13 08:21:50.462112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.185 [2024-12-13 08:21:50.462186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.185 [2024-12-13 08:21:50.462287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.185 [2024-12-13 08:21:50.462372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.185 [2024-12-13 08:21:50.462389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 08:21:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 [2024-12-13 08:21:50.525997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.185 [2024-12-13 08:21:50.526109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.185 [2024-12-13 08:21:50.526148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:38.185 [2024-12-13 08:21:50.526177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.185 [2024-12-13 08:21:50.528577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.185 [2024-12-13 08:21:50.528612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.185 [2024-12-13 08:21:50.528697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:38.185 [2024-12-13 08:21:50.528750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.185 [2024-12-13 08:21:50.528871] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:38.185 [2024-12-13 08:21:50.528887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.185 [2024-12-13 08:21:50.528904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:38.185 [2024-12-13 
08:21:50.528972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.185 pt1 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.185 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.444 08:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.444 "name": "raid_bdev1", 00:10:38.444 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:38.444 "strip_size_kb": 0, 00:10:38.444 "state": "configuring", 00:10:38.444 "raid_level": "raid1", 00:10:38.444 "superblock": true, 00:10:38.444 "num_base_bdevs": 3, 00:10:38.444 "num_base_bdevs_discovered": 1, 00:10:38.444 "num_base_bdevs_operational": 2, 00:10:38.444 "base_bdevs_list": [ 00:10:38.444 { 00:10:38.444 "name": null, 00:10:38.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.445 "is_configured": false, 00:10:38.445 "data_offset": 2048, 00:10:38.445 "data_size": 63488 00:10:38.445 }, 00:10:38.445 { 00:10:38.445 "name": "pt2", 00:10:38.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.445 "is_configured": true, 00:10:38.445 "data_offset": 2048, 00:10:38.445 "data_size": 63488 00:10:38.445 }, 00:10:38.445 { 00:10:38.445 "name": null, 00:10:38.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.445 "is_configured": false, 00:10:38.445 "data_offset": 2048, 00:10:38.445 "data_size": 63488 00:10:38.445 } 00:10:38.445 ] 00:10:38.445 }' 00:10:38.445 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.445 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.705 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:38.705 08:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:38.705 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.705 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.705 08:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.705 [2024-12-13 08:21:51.017261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.705 [2024-12-13 08:21:51.017453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.705 [2024-12-13 08:21:51.017504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:38.705 [2024-12-13 08:21:51.017551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.705 [2024-12-13 08:21:51.018143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.705 [2024-12-13 08:21:51.018212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.705 [2024-12-13 08:21:51.018348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.705 [2024-12-13 08:21:51.018408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.705 [2024-12-13 08:21:51.018600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:38.705 [2024-12-13 08:21:51.018644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.705 [2024-12-13 08:21:51.018956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:38.705 [2024-12-13 08:21:51.019201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:38.705 [2024-12-13 08:21:51.019261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:38.705 [2024-12-13 08:21:51.019483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.705 pt3 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.705 "name": "raid_bdev1", 00:10:38.705 "uuid": "1d4b6ce2-a918-4705-8795-d73be9e39756", 00:10:38.705 "strip_size_kb": 0, 00:10:38.705 "state": "online", 00:10:38.705 "raid_level": "raid1", 00:10:38.705 "superblock": true, 00:10:38.705 "num_base_bdevs": 3, 00:10:38.705 "num_base_bdevs_discovered": 2, 00:10:38.705 "num_base_bdevs_operational": 2, 00:10:38.705 "base_bdevs_list": [ 00:10:38.705 { 00:10:38.705 "name": null, 00:10:38.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.705 "is_configured": false, 00:10:38.705 "data_offset": 2048, 00:10:38.705 "data_size": 63488 00:10:38.705 }, 00:10:38.705 { 00:10:38.705 "name": "pt2", 00:10:38.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.705 "is_configured": true, 00:10:38.705 "data_offset": 2048, 00:10:38.705 "data_size": 63488 00:10:38.705 }, 00:10:38.705 { 00:10:38.705 "name": "pt3", 00:10:38.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.705 "is_configured": true, 00:10:38.705 "data_offset": 2048, 00:10:38.705 "data_size": 63488 00:10:38.705 } 00:10:38.705 ] 00:10:38.705 }' 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.705 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:39.274 
08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.274 [2024-12-13 08:21:51.492779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.274 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1d4b6ce2-a918-4705-8795-d73be9e39756 '!=' 1d4b6ce2-a918-4705-8795-d73be9e39756 ']' 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68801 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68801 ']' 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68801 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68801 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68801' 00:10:39.275 killing process with pid 68801 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68801 00:10:39.275 [2024-12-13 
08:21:51.569879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.275 08:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68801 00:10:39.275 [2024-12-13 08:21:51.570043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.275 [2024-12-13 08:21:51.570138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.275 [2024-12-13 08:21:51.570153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:39.842 [2024-12-13 08:21:51.899417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.777 08:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:40.777 00:10:40.777 real 0m7.902s 00:10:40.777 user 0m12.269s 00:10:40.777 sys 0m1.357s 00:10:40.777 08:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.777 08:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.777 ************************************ 00:10:40.777 END TEST raid_superblock_test 00:10:40.777 ************************************ 00:10:41.044 08:21:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:41.044 08:21:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.044 08:21:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.044 08:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.044 ************************************ 00:10:41.044 START TEST raid_read_error_test 00:10:41.044 ************************************ 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:41.044 08:21:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GBpASvRXez 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69247 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69247 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69247 ']' 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.044 08:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.044 [2024-12-13 08:21:53.271759] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:10:41.044 [2024-12-13 08:21:53.271941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69247 ] 00:10:41.333 [2024-12-13 08:21:53.447453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.333 [2024-12-13 08:21:53.571987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.593 [2024-12-13 08:21:53.799253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.593 [2024-12-13 08:21:53.799392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.852 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 BaseBdev1_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 true 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 [2024-12-13 08:21:54.257452] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:42.112 [2024-12-13 08:21:54.257553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.112 [2024-12-13 08:21:54.257592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:42.112 [2024-12-13 08:21:54.257671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.112 [2024-12-13 08:21:54.259984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.112 [2024-12-13 08:21:54.260071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:42.112 BaseBdev1 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 BaseBdev2_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 true 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 [2024-12-13 08:21:54.325166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:42.112 [2024-12-13 08:21:54.325296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.112 [2024-12-13 08:21:54.325335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:42.112 [2024-12-13 08:21:54.325371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.112 [2024-12-13 08:21:54.327644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.112 [2024-12-13 08:21:54.327733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:42.112 BaseBdev2 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 BaseBdev3_malloc 00:10:42.112 08:21:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 true 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.112 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.112 [2024-12-13 08:21:54.421645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:42.112 [2024-12-13 08:21:54.421702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.112 [2024-12-13 08:21:54.421720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:42.112 [2024-12-13 08:21:54.421731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.112 [2024-12-13 08:21:54.423829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.112 [2024-12-13 08:21:54.423869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:42.112 BaseBdev3 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.113 [2024-12-13 08:21:54.433690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.113 [2024-12-13 08:21:54.435528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.113 [2024-12-13 08:21:54.435647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.113 [2024-12-13 08:21:54.435908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.113 [2024-12-13 08:21:54.435956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:42.113 [2024-12-13 08:21:54.436240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:42.113 [2024-12-13 08:21:54.436459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.113 [2024-12-13 08:21:54.436504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:42.113 [2024-12-13 08:21:54.436693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.113 08:21:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.113 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.372 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.372 "name": "raid_bdev1", 00:10:42.372 "uuid": "bc057ff2-913d-4833-b06c-907737a48c09", 00:10:42.372 "strip_size_kb": 0, 00:10:42.372 "state": "online", 00:10:42.372 "raid_level": "raid1", 00:10:42.372 "superblock": true, 00:10:42.372 "num_base_bdevs": 3, 00:10:42.372 "num_base_bdevs_discovered": 3, 00:10:42.372 "num_base_bdevs_operational": 3, 00:10:42.372 "base_bdevs_list": [ 00:10:42.372 { 00:10:42.372 "name": "BaseBdev1", 00:10:42.372 "uuid": "2bef7df2-413b-51ae-8a00-73647bb590ee", 00:10:42.372 "is_configured": true, 00:10:42.372 "data_offset": 2048, 00:10:42.372 "data_size": 63488 00:10:42.372 }, 00:10:42.372 { 00:10:42.372 "name": "BaseBdev2", 00:10:42.372 "uuid": "09b5dcdb-a123-5d19-b5a4-3c1eb357011f", 00:10:42.372 "is_configured": true, 00:10:42.372 "data_offset": 2048, 00:10:42.372 "data_size": 63488 
00:10:42.372 }, 00:10:42.372 { 00:10:42.372 "name": "BaseBdev3", 00:10:42.372 "uuid": "7acaace8-b829-573e-bf44-b2223c609c90", 00:10:42.372 "is_configured": true, 00:10:42.372 "data_offset": 2048, 00:10:42.372 "data_size": 63488 00:10:42.372 } 00:10:42.372 ] 00:10:42.372 }' 00:10:42.372 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.372 08:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.631 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:42.631 08:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:42.893 [2024-12-13 08:21:55.034207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.833 
08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.833 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.834 08:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.834 08:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.834 "name": "raid_bdev1", 00:10:43.834 "uuid": "bc057ff2-913d-4833-b06c-907737a48c09", 00:10:43.834 "strip_size_kb": 0, 00:10:43.834 "state": "online", 00:10:43.834 "raid_level": "raid1", 00:10:43.834 "superblock": true, 00:10:43.834 "num_base_bdevs": 3, 00:10:43.834 "num_base_bdevs_discovered": 3, 00:10:43.834 "num_base_bdevs_operational": 3, 00:10:43.834 "base_bdevs_list": [ 00:10:43.834 { 00:10:43.834 "name": "BaseBdev1", 00:10:43.834 "uuid": "2bef7df2-413b-51ae-8a00-73647bb590ee", 
00:10:43.834 "is_configured": true, 00:10:43.834 "data_offset": 2048, 00:10:43.834 "data_size": 63488 00:10:43.834 }, 00:10:43.834 { 00:10:43.834 "name": "BaseBdev2", 00:10:43.834 "uuid": "09b5dcdb-a123-5d19-b5a4-3c1eb357011f", 00:10:43.834 "is_configured": true, 00:10:43.834 "data_offset": 2048, 00:10:43.834 "data_size": 63488 00:10:43.834 }, 00:10:43.834 { 00:10:43.834 "name": "BaseBdev3", 00:10:43.834 "uuid": "7acaace8-b829-573e-bf44-b2223c609c90", 00:10:43.834 "is_configured": true, 00:10:43.834 "data_offset": 2048, 00:10:43.834 "data_size": 63488 00:10:43.834 } 00:10:43.834 ] 00:10:43.834 }' 00:10:43.834 08:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.834 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.093 [2024-12-13 08:21:56.438190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.093 [2024-12-13 08:21:56.438284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.093 { 00:10:44.093 "results": [ 00:10:44.093 { 00:10:44.093 "job": "raid_bdev1", 00:10:44.093 "core_mask": "0x1", 00:10:44.093 "workload": "randrw", 00:10:44.093 "percentage": 50, 00:10:44.093 "status": "finished", 00:10:44.093 "queue_depth": 1, 00:10:44.093 "io_size": 131072, 00:10:44.093 "runtime": 1.404882, 00:10:44.093 "iops": 12164.010927608155, 00:10:44.093 "mibps": 1520.5013659510194, 00:10:44.093 "io_failed": 0, 00:10:44.093 "io_timeout": 0, 00:10:44.093 "avg_latency_us": 79.21143829338364, 00:10:44.093 "min_latency_us": 24.929257641921396, 00:10:44.093 "max_latency_us": 1695.6366812227075 
00:10:44.093 } 00:10:44.093 ], 00:10:44.093 "core_count": 1 00:10:44.093 } 00:10:44.093 [2024-12-13 08:21:56.441309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.093 [2024-12-13 08:21:56.441358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.093 [2024-12-13 08:21:56.441461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.093 [2024-12-13 08:21:56.441471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69247 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69247 ']' 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69247 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.093 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69247 00:10:44.353 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.353 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.353 killing process with pid 69247 00:10:44.353 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69247' 00:10:44.353 08:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69247 00:10:44.353 [2024-12-13 08:21:56.487080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.353 08:21:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69247 00:10:44.611 [2024-12-13 08:21:56.738757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GBpASvRXez 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:45.990 00:10:45.990 real 0m4.830s 00:10:45.990 user 0m5.804s 00:10:45.990 sys 0m0.595s 00:10:45.990 ************************************ 00:10:45.990 END TEST raid_read_error_test 00:10:45.990 ************************************ 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.990 08:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 08:21:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:45.990 08:21:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.990 08:21:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.990 08:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 ************************************ 00:10:45.990 START TEST raid_write_error_test 00:10:45.990 ************************************ 00:10:45.990 08:21:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TCnqTNxli8 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69394 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69394 00:10:45.990 08:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69394 ']' 00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.991 08:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.991 [2024-12-13 08:21:58.202035] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:45.991 [2024-12-13 08:21:58.202175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69394 ] 00:10:46.250 [2024-12-13 08:21:58.358160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.250 [2024-12-13 08:21:58.483703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.510 [2024-12-13 08:21:58.704498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.510 [2024-12-13 08:21:58.704664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.769 BaseBdev1_malloc 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.769 true 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.769 [2024-12-13 08:21:59.125854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.769 [2024-12-13 08:21:59.125971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.769 [2024-12-13 08:21:59.126011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:46.769 [2024-12-13 08:21:59.126061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.769 [2024-12-13 08:21:59.128284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.769 [2024-12-13 08:21:59.128348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.769 BaseBdev1 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.769 08:21:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.029 BaseBdev2_malloc 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 true 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 [2024-12-13 08:21:59.183593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.030 [2024-12-13 08:21:59.183716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.030 [2024-12-13 08:21:59.183766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.030 [2024-12-13 08:21:59.183804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.030 [2024-12-13 08:21:59.185945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.030 [2024-12-13 08:21:59.186018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.030 BaseBdev2 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.030 08:21:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 BaseBdev3_malloc 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 true 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 [2024-12-13 08:21:59.264458] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.030 [2024-12-13 08:21:59.264522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.030 [2024-12-13 08:21:59.264546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.030 [2024-12-13 08:21:59.264557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.030 [2024-12-13 08:21:59.266875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.030 [2024-12-13 08:21:59.266925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:47.030 BaseBdev3 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 [2024-12-13 08:21:59.276509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.030 [2024-12-13 08:21:59.278535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.030 [2024-12-13 08:21:59.278688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.030 [2024-12-13 08:21:59.279014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:47.030 [2024-12-13 08:21:59.279077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.030 [2024-12-13 08:21:59.279447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:47.030 [2024-12-13 08:21:59.279674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:47.030 [2024-12-13 08:21:59.279723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:47.030 [2024-12-13 08:21:59.279947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.030 "name": "raid_bdev1", 00:10:47.030 "uuid": "33689d82-bb69-43a8-987d-ecd9ed08dbaf", 00:10:47.030 "strip_size_kb": 0, 00:10:47.030 "state": "online", 00:10:47.030 "raid_level": "raid1", 00:10:47.030 "superblock": true, 00:10:47.030 "num_base_bdevs": 3, 00:10:47.030 "num_base_bdevs_discovered": 3, 00:10:47.030 "num_base_bdevs_operational": 3, 00:10:47.030 "base_bdevs_list": [ 00:10:47.030 { 00:10:47.030 "name": "BaseBdev1", 00:10:47.030 
"uuid": "78c90154-c95b-59a8-9532-4f1f770ce3a0", 00:10:47.030 "is_configured": true, 00:10:47.030 "data_offset": 2048, 00:10:47.030 "data_size": 63488 00:10:47.030 }, 00:10:47.030 { 00:10:47.030 "name": "BaseBdev2", 00:10:47.030 "uuid": "af7190d9-923b-599e-a979-c999879eaccd", 00:10:47.030 "is_configured": true, 00:10:47.030 "data_offset": 2048, 00:10:47.030 "data_size": 63488 00:10:47.030 }, 00:10:47.030 { 00:10:47.030 "name": "BaseBdev3", 00:10:47.030 "uuid": "4b223a4e-5b5e-54a3-bca7-9fd02e59cdf8", 00:10:47.030 "is_configured": true, 00:10:47.030 "data_offset": 2048, 00:10:47.030 "data_size": 63488 00:10:47.030 } 00:10:47.030 ] 00:10:47.030 }' 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.030 08:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.600 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.600 08:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.600 [2024-12-13 08:21:59.828928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 [2024-12-13 08:22:00.737192] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:48.537 [2024-12-13 08:22:00.737385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.537 [2024-12-13 08:22:00.737668] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.537 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.537 "name": "raid_bdev1", 00:10:48.537 "uuid": "33689d82-bb69-43a8-987d-ecd9ed08dbaf", 00:10:48.537 "strip_size_kb": 0, 00:10:48.537 "state": "online", 00:10:48.537 "raid_level": "raid1", 00:10:48.537 "superblock": true, 00:10:48.537 "num_base_bdevs": 3, 00:10:48.537 "num_base_bdevs_discovered": 2, 00:10:48.537 "num_base_bdevs_operational": 2, 00:10:48.537 "base_bdevs_list": [ 00:10:48.537 { 00:10:48.537 "name": null, 00:10:48.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.537 "is_configured": false, 00:10:48.537 "data_offset": 0, 00:10:48.537 "data_size": 63488 00:10:48.537 }, 00:10:48.537 { 00:10:48.537 "name": "BaseBdev2", 00:10:48.537 "uuid": "af7190d9-923b-599e-a979-c999879eaccd", 00:10:48.537 "is_configured": true, 00:10:48.538 "data_offset": 2048, 00:10:48.538 "data_size": 63488 00:10:48.538 }, 00:10:48.538 { 00:10:48.538 "name": "BaseBdev3", 00:10:48.538 "uuid": "4b223a4e-5b5e-54a3-bca7-9fd02e59cdf8", 00:10:48.538 "is_configured": true, 00:10:48.538 "data_offset": 2048, 00:10:48.538 "data_size": 63488 00:10:48.538 } 00:10:48.538 ] 00:10:48.538 }' 00:10:48.538 08:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.538 08:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 [2024-12-13 08:22:01.227820] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.106 [2024-12-13 08:22:01.227923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.106 [2024-12-13 08:22:01.230746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.106 [2024-12-13 08:22:01.230855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.106 [2024-12-13 08:22:01.230984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.106 [2024-12-13 08:22:01.231047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:49.106 { 00:10:49.106 "results": [ 00:10:49.106 { 00:10:49.106 "job": "raid_bdev1", 00:10:49.106 "core_mask": "0x1", 00:10:49.106 "workload": "randrw", 00:10:49.106 "percentage": 50, 00:10:49.106 "status": "finished", 00:10:49.106 "queue_depth": 1, 00:10:49.106 "io_size": 131072, 00:10:49.106 "runtime": 1.399798, 00:10:49.106 "iops": 13236.19550820904, 00:10:49.106 "mibps": 1654.52443852613, 00:10:49.106 "io_failed": 0, 00:10:49.106 "io_timeout": 0, 00:10:49.106 "avg_latency_us": 72.53166806193482, 00:10:49.106 "min_latency_us": 24.817467248908297, 00:10:49.106 "max_latency_us": 1495.3082969432314 00:10:49.106 } 00:10:49.106 ], 00:10:49.106 "core_count": 1 00:10:49.106 } 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69394 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69394 ']' 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69394 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:49.106 08:22:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.106 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69394 00:10:49.107 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.107 killing process with pid 69394 00:10:49.107 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.107 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69394' 00:10:49.107 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69394 00:10:49.107 [2024-12-13 08:22:01.278668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.107 08:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69394 00:10:49.366 [2024-12-13 08:22:01.527525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TCnqTNxli8 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:50.746 00:10:50.746 real 0m4.763s 00:10:50.746 user 0m5.683s 00:10:50.746 sys 0m0.575s 00:10:50.746 
************************************ 00:10:50.746 END TEST raid_write_error_test 00:10:50.746 ************************************ 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.746 08:22:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.746 08:22:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:50.746 08:22:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:50.746 08:22:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:50.746 08:22:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.746 08:22:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.746 08:22:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.746 ************************************ 00:10:50.746 START TEST raid_state_function_test 00:10:50.746 ************************************ 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69536 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.746 Process raid pid: 69536 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69536' 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69536 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69536 ']' 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.746 08:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.746 [2024-12-13 08:22:03.031571] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:10:50.747 [2024-12-13 08:22:03.031732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.005 [2024-12-13 08:22:03.213475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.005 [2024-12-13 08:22:03.341735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.264 [2024-12-13 08:22:03.561503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.264 [2024-12-13 08:22:03.561548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.831 [2024-12-13 08:22:03.908259] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.831 [2024-12-13 08:22:03.908367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.831 [2024-12-13 08:22:03.908398] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.831 [2024-12-13 08:22:03.908423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.831 [2024-12-13 08:22:03.908442] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:51.831 [2024-12-13 08:22:03.908464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.831 [2024-12-13 08:22:03.908482] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:51.831 [2024-12-13 08:22:03.908502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.831 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.831 "name": "Existed_Raid", 00:10:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.831 "strip_size_kb": 64, 00:10:51.831 "state": "configuring", 00:10:51.831 "raid_level": "raid0", 00:10:51.832 "superblock": false, 00:10:51.832 "num_base_bdevs": 4, 00:10:51.832 "num_base_bdevs_discovered": 0, 00:10:51.832 "num_base_bdevs_operational": 4, 00:10:51.832 "base_bdevs_list": [ 00:10:51.832 { 00:10:51.832 "name": "BaseBdev1", 00:10:51.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.832 "is_configured": false, 00:10:51.832 "data_offset": 0, 00:10:51.832 "data_size": 0 00:10:51.832 }, 00:10:51.832 { 00:10:51.832 "name": "BaseBdev2", 00:10:51.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.832 "is_configured": false, 00:10:51.832 "data_offset": 0, 00:10:51.832 "data_size": 0 00:10:51.832 }, 00:10:51.832 { 00:10:51.832 "name": "BaseBdev3", 00:10:51.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.832 "is_configured": false, 00:10:51.832 "data_offset": 0, 00:10:51.832 "data_size": 0 00:10:51.832 }, 00:10:51.832 { 00:10:51.832 "name": "BaseBdev4", 00:10:51.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.832 "is_configured": false, 00:10:51.832 "data_offset": 0, 00:10:51.832 "data_size": 0 00:10:51.832 } 00:10:51.832 ] 00:10:51.832 }' 00:10:51.832 08:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.832 08:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.090 [2024-12-13 08:22:04.407352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.090 [2024-12-13 08:22:04.407452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.090 [2024-12-13 08:22:04.415366] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.090 [2024-12-13 08:22:04.415458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.090 [2024-12-13 08:22:04.415503] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.090 [2024-12-13 08:22:04.415541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.090 [2024-12-13 08:22:04.415589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.090 [2024-12-13 08:22:04.415625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.090 [2024-12-13 08:22:04.415656] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.090 [2024-12-13 08:22:04.415691] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.090 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.349 [2024-12-13 08:22:04.460923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.349 BaseBdev1 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.349 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.350 [ 00:10:52.350 { 00:10:52.350 "name": "BaseBdev1", 00:10:52.350 "aliases": [ 00:10:52.350 "7cac4acd-6cc8-4963-8869-b710227632f9" 00:10:52.350 ], 00:10:52.350 "product_name": "Malloc disk", 00:10:52.350 "block_size": 512, 00:10:52.350 "num_blocks": 65536, 00:10:52.350 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:52.350 "assigned_rate_limits": { 00:10:52.350 "rw_ios_per_sec": 0, 00:10:52.350 "rw_mbytes_per_sec": 0, 00:10:52.350 "r_mbytes_per_sec": 0, 00:10:52.350 "w_mbytes_per_sec": 0 00:10:52.350 }, 00:10:52.350 "claimed": true, 00:10:52.350 "claim_type": "exclusive_write", 00:10:52.350 "zoned": false, 00:10:52.350 "supported_io_types": { 00:10:52.350 "read": true, 00:10:52.350 "write": true, 00:10:52.350 "unmap": true, 00:10:52.350 "flush": true, 00:10:52.350 "reset": true, 00:10:52.350 "nvme_admin": false, 00:10:52.350 "nvme_io": false, 00:10:52.350 "nvme_io_md": false, 00:10:52.350 "write_zeroes": true, 00:10:52.350 "zcopy": true, 00:10:52.350 "get_zone_info": false, 00:10:52.350 "zone_management": false, 00:10:52.350 "zone_append": false, 00:10:52.350 "compare": false, 00:10:52.350 "compare_and_write": false, 00:10:52.350 "abort": true, 00:10:52.350 "seek_hole": false, 00:10:52.350 "seek_data": false, 00:10:52.350 "copy": true, 00:10:52.350 "nvme_iov_md": false 00:10:52.350 }, 00:10:52.350 "memory_domains": [ 00:10:52.350 { 00:10:52.350 "dma_device_id": "system", 00:10:52.350 "dma_device_type": 1 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.350 "dma_device_type": 2 00:10:52.350 } 00:10:52.350 ], 00:10:52.350 "driver_specific": {} 00:10:52.350 } 00:10:52.350 ] 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.350 "name": "Existed_Raid", 
00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.350 "strip_size_kb": 64, 00:10:52.350 "state": "configuring", 00:10:52.350 "raid_level": "raid0", 00:10:52.350 "superblock": false, 00:10:52.350 "num_base_bdevs": 4, 00:10:52.350 "num_base_bdevs_discovered": 1, 00:10:52.350 "num_base_bdevs_operational": 4, 00:10:52.350 "base_bdevs_list": [ 00:10:52.350 { 00:10:52.350 "name": "BaseBdev1", 00:10:52.350 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:52.350 "is_configured": true, 00:10:52.350 "data_offset": 0, 00:10:52.350 "data_size": 65536 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "BaseBdev2", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.350 "is_configured": false, 00:10:52.350 "data_offset": 0, 00:10:52.350 "data_size": 0 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "BaseBdev3", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.350 "is_configured": false, 00:10:52.350 "data_offset": 0, 00:10:52.350 "data_size": 0 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "BaseBdev4", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.350 "is_configured": false, 00:10:52.350 "data_offset": 0, 00:10:52.350 "data_size": 0 00:10:52.350 } 00:10:52.350 ] 00:10:52.350 }' 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.350 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.918 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.918 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.918 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.918 [2024-12-13 08:22:04.988135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.918 [2024-12-13 08:22:04.988246] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:52.918 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.919 [2024-12-13 08:22:04.996161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.919 [2024-12-13 08:22:04.998275] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.919 [2024-12-13 08:22:04.998371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.919 [2024-12-13 08:22:04.998410] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.919 [2024-12-13 08:22:04.998443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.919 [2024-12-13 08:22:04.998468] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.919 [2024-12-13 08:22:04.998495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:52.919 08:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.919 "name": "Existed_Raid", 00:10:52.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.919 "strip_size_kb": 64, 00:10:52.919 "state": "configuring", 00:10:52.919 "raid_level": "raid0", 00:10:52.919 "superblock": false, 00:10:52.919 "num_base_bdevs": 4, 00:10:52.919 
"num_base_bdevs_discovered": 1, 00:10:52.919 "num_base_bdevs_operational": 4, 00:10:52.919 "base_bdevs_list": [ 00:10:52.919 { 00:10:52.919 "name": "BaseBdev1", 00:10:52.919 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:52.919 "is_configured": true, 00:10:52.919 "data_offset": 0, 00:10:52.919 "data_size": 65536 00:10:52.919 }, 00:10:52.919 { 00:10:52.919 "name": "BaseBdev2", 00:10:52.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.919 "is_configured": false, 00:10:52.919 "data_offset": 0, 00:10:52.919 "data_size": 0 00:10:52.919 }, 00:10:52.919 { 00:10:52.919 "name": "BaseBdev3", 00:10:52.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.919 "is_configured": false, 00:10:52.919 "data_offset": 0, 00:10:52.919 "data_size": 0 00:10:52.919 }, 00:10:52.919 { 00:10:52.919 "name": "BaseBdev4", 00:10:52.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.919 "is_configured": false, 00:10:52.919 "data_offset": 0, 00:10:52.919 "data_size": 0 00:10:52.919 } 00:10:52.919 ] 00:10:52.919 }' 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.919 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.177 [2024-12-13 08:22:05.477446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.177 BaseBdev2 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:53.177 08:22:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.177 [ 00:10:53.177 { 00:10:53.177 "name": "BaseBdev2", 00:10:53.177 "aliases": [ 00:10:53.177 "03af0192-f3fb-4247-811d-e7ed81898e29" 00:10:53.177 ], 00:10:53.177 "product_name": "Malloc disk", 00:10:53.177 "block_size": 512, 00:10:53.177 "num_blocks": 65536, 00:10:53.177 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:53.177 "assigned_rate_limits": { 00:10:53.177 "rw_ios_per_sec": 0, 00:10:53.177 "rw_mbytes_per_sec": 0, 00:10:53.177 "r_mbytes_per_sec": 0, 00:10:53.177 "w_mbytes_per_sec": 0 00:10:53.177 }, 00:10:53.177 "claimed": true, 00:10:53.177 "claim_type": "exclusive_write", 00:10:53.177 "zoned": false, 00:10:53.177 "supported_io_types": { 
00:10:53.177 "read": true, 00:10:53.177 "write": true, 00:10:53.177 "unmap": true, 00:10:53.177 "flush": true, 00:10:53.177 "reset": true, 00:10:53.177 "nvme_admin": false, 00:10:53.177 "nvme_io": false, 00:10:53.177 "nvme_io_md": false, 00:10:53.177 "write_zeroes": true, 00:10:53.177 "zcopy": true, 00:10:53.177 "get_zone_info": false, 00:10:53.177 "zone_management": false, 00:10:53.177 "zone_append": false, 00:10:53.177 "compare": false, 00:10:53.177 "compare_and_write": false, 00:10:53.177 "abort": true, 00:10:53.177 "seek_hole": false, 00:10:53.177 "seek_data": false, 00:10:53.177 "copy": true, 00:10:53.177 "nvme_iov_md": false 00:10:53.177 }, 00:10:53.177 "memory_domains": [ 00:10:53.177 { 00:10:53.177 "dma_device_id": "system", 00:10:53.177 "dma_device_type": 1 00:10:53.177 }, 00:10:53.177 { 00:10:53.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.177 "dma_device_type": 2 00:10:53.177 } 00:10:53.177 ], 00:10:53.177 "driver_specific": {} 00:10:53.177 } 00:10:53.177 ] 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.177 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.178 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.436 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.436 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.436 "name": "Existed_Raid", 00:10:53.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.436 "strip_size_kb": 64, 00:10:53.436 "state": "configuring", 00:10:53.436 "raid_level": "raid0", 00:10:53.436 "superblock": false, 00:10:53.436 "num_base_bdevs": 4, 00:10:53.436 "num_base_bdevs_discovered": 2, 00:10:53.436 "num_base_bdevs_operational": 4, 00:10:53.436 "base_bdevs_list": [ 00:10:53.436 { 00:10:53.436 "name": "BaseBdev1", 00:10:53.436 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:53.436 "is_configured": true, 00:10:53.436 "data_offset": 0, 00:10:53.436 "data_size": 65536 00:10:53.436 }, 00:10:53.436 { 00:10:53.436 "name": "BaseBdev2", 00:10:53.436 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:53.436 
"is_configured": true, 00:10:53.436 "data_offset": 0, 00:10:53.436 "data_size": 65536 00:10:53.436 }, 00:10:53.436 { 00:10:53.436 "name": "BaseBdev3", 00:10:53.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.436 "is_configured": false, 00:10:53.436 "data_offset": 0, 00:10:53.436 "data_size": 0 00:10:53.436 }, 00:10:53.436 { 00:10:53.436 "name": "BaseBdev4", 00:10:53.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.436 "is_configured": false, 00:10:53.436 "data_offset": 0, 00:10:53.436 "data_size": 0 00:10:53.436 } 00:10:53.436 ] 00:10:53.436 }' 00:10:53.436 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.436 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.696 08:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.696 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.696 08:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.696 [2024-12-13 08:22:06.012362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.696 BaseBdev3 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.696 [ 00:10:53.696 { 00:10:53.696 "name": "BaseBdev3", 00:10:53.696 "aliases": [ 00:10:53.696 "21cdab76-b884-4f80-a7f1-084e206c66d5" 00:10:53.696 ], 00:10:53.696 "product_name": "Malloc disk", 00:10:53.696 "block_size": 512, 00:10:53.696 "num_blocks": 65536, 00:10:53.696 "uuid": "21cdab76-b884-4f80-a7f1-084e206c66d5", 00:10:53.696 "assigned_rate_limits": { 00:10:53.696 "rw_ios_per_sec": 0, 00:10:53.696 "rw_mbytes_per_sec": 0, 00:10:53.696 "r_mbytes_per_sec": 0, 00:10:53.696 "w_mbytes_per_sec": 0 00:10:53.696 }, 00:10:53.696 "claimed": true, 00:10:53.696 "claim_type": "exclusive_write", 00:10:53.696 "zoned": false, 00:10:53.696 "supported_io_types": { 00:10:53.696 "read": true, 00:10:53.696 "write": true, 00:10:53.696 "unmap": true, 00:10:53.696 "flush": true, 00:10:53.696 "reset": true, 00:10:53.696 "nvme_admin": false, 00:10:53.696 "nvme_io": false, 00:10:53.696 "nvme_io_md": false, 00:10:53.696 "write_zeroes": true, 00:10:53.696 "zcopy": true, 00:10:53.696 "get_zone_info": false, 00:10:53.696 "zone_management": false, 00:10:53.696 "zone_append": false, 00:10:53.696 "compare": false, 00:10:53.696 "compare_and_write": false, 
00:10:53.696 "abort": true, 00:10:53.696 "seek_hole": false, 00:10:53.696 "seek_data": false, 00:10:53.696 "copy": true, 00:10:53.696 "nvme_iov_md": false 00:10:53.696 }, 00:10:53.696 "memory_domains": [ 00:10:53.696 { 00:10:53.696 "dma_device_id": "system", 00:10:53.696 "dma_device_type": 1 00:10:53.696 }, 00:10:53.696 { 00:10:53.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.696 "dma_device_type": 2 00:10:53.696 } 00:10:53.696 ], 00:10:53.696 "driver_specific": {} 00:10:53.696 } 00:10:53.696 ] 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.696 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.956 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.956 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.956 "name": "Existed_Raid", 00:10:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.956 "strip_size_kb": 64, 00:10:53.956 "state": "configuring", 00:10:53.956 "raid_level": "raid0", 00:10:53.956 "superblock": false, 00:10:53.956 "num_base_bdevs": 4, 00:10:53.956 "num_base_bdevs_discovered": 3, 00:10:53.956 "num_base_bdevs_operational": 4, 00:10:53.956 "base_bdevs_list": [ 00:10:53.956 { 00:10:53.956 "name": "BaseBdev1", 00:10:53.956 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:53.956 "is_configured": true, 00:10:53.956 "data_offset": 0, 00:10:53.956 "data_size": 65536 00:10:53.956 }, 00:10:53.956 { 00:10:53.956 "name": "BaseBdev2", 00:10:53.956 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:53.956 "is_configured": true, 00:10:53.956 "data_offset": 0, 00:10:53.956 "data_size": 65536 00:10:53.956 }, 00:10:53.956 { 00:10:53.956 "name": "BaseBdev3", 00:10:53.956 "uuid": "21cdab76-b884-4f80-a7f1-084e206c66d5", 00:10:53.956 "is_configured": true, 00:10:53.956 "data_offset": 0, 00:10:53.956 "data_size": 65536 00:10:53.956 }, 00:10:53.956 { 00:10:53.956 "name": "BaseBdev4", 00:10:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.956 "is_configured": false, 
00:10:53.956 "data_offset": 0, 00:10:53.956 "data_size": 0 00:10:53.956 } 00:10:53.956 ] 00:10:53.956 }' 00:10:53.956 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.956 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.215 [2024-12-13 08:22:06.555034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.215 [2024-12-13 08:22:06.555206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.215 [2024-12-13 08:22:06.555239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:54.215 [2024-12-13 08:22:06.555580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:54.215 [2024-12-13 08:22:06.555811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.215 BaseBdev4 00:10:54.215 [2024-12-13 08:22:06.555860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:54.215 [2024-12-13 08:22:06.556167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.215 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.474 [ 00:10:54.474 { 00:10:54.474 "name": "BaseBdev4", 00:10:54.474 "aliases": [ 00:10:54.474 "2e60692f-b581-41c2-abee-e884a71dc6ff" 00:10:54.474 ], 00:10:54.474 "product_name": "Malloc disk", 00:10:54.474 "block_size": 512, 00:10:54.474 "num_blocks": 65536, 00:10:54.474 "uuid": "2e60692f-b581-41c2-abee-e884a71dc6ff", 00:10:54.474 "assigned_rate_limits": { 00:10:54.474 "rw_ios_per_sec": 0, 00:10:54.474 "rw_mbytes_per_sec": 0, 00:10:54.474 "r_mbytes_per_sec": 0, 00:10:54.474 "w_mbytes_per_sec": 0 00:10:54.474 }, 00:10:54.474 "claimed": true, 00:10:54.474 "claim_type": "exclusive_write", 00:10:54.474 "zoned": false, 00:10:54.474 "supported_io_types": { 00:10:54.474 "read": true, 00:10:54.474 "write": true, 00:10:54.474 "unmap": true, 00:10:54.474 "flush": true, 00:10:54.474 "reset": true, 00:10:54.474 
"nvme_admin": false, 00:10:54.474 "nvme_io": false, 00:10:54.474 "nvme_io_md": false, 00:10:54.474 "write_zeroes": true, 00:10:54.474 "zcopy": true, 00:10:54.474 "get_zone_info": false, 00:10:54.474 "zone_management": false, 00:10:54.474 "zone_append": false, 00:10:54.474 "compare": false, 00:10:54.474 "compare_and_write": false, 00:10:54.474 "abort": true, 00:10:54.474 "seek_hole": false, 00:10:54.474 "seek_data": false, 00:10:54.474 "copy": true, 00:10:54.474 "nvme_iov_md": false 00:10:54.474 }, 00:10:54.474 "memory_domains": [ 00:10:54.474 { 00:10:54.474 "dma_device_id": "system", 00:10:54.474 "dma_device_type": 1 00:10:54.474 }, 00:10:54.474 { 00:10:54.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.474 "dma_device_type": 2 00:10:54.474 } 00:10:54.474 ], 00:10:54.474 "driver_specific": {} 00:10:54.474 } 00:10:54.474 ] 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.474 08:22:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.474 "name": "Existed_Raid", 00:10:54.474 "uuid": "570af3a7-4bfe-4386-a24b-df0a965d24f1", 00:10:54.474 "strip_size_kb": 64, 00:10:54.474 "state": "online", 00:10:54.474 "raid_level": "raid0", 00:10:54.474 "superblock": false, 00:10:54.474 "num_base_bdevs": 4, 00:10:54.474 "num_base_bdevs_discovered": 4, 00:10:54.474 "num_base_bdevs_operational": 4, 00:10:54.474 "base_bdevs_list": [ 00:10:54.474 { 00:10:54.474 "name": "BaseBdev1", 00:10:54.474 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:54.474 "is_configured": true, 00:10:54.474 "data_offset": 0, 00:10:54.474 "data_size": 65536 00:10:54.474 }, 00:10:54.474 { 00:10:54.474 "name": "BaseBdev2", 00:10:54.474 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:54.474 "is_configured": true, 00:10:54.474 "data_offset": 0, 00:10:54.474 "data_size": 65536 00:10:54.474 }, 00:10:54.474 { 00:10:54.474 "name": "BaseBdev3", 00:10:54.474 "uuid": 
"21cdab76-b884-4f80-a7f1-084e206c66d5", 00:10:54.474 "is_configured": true, 00:10:54.474 "data_offset": 0, 00:10:54.474 "data_size": 65536 00:10:54.474 }, 00:10:54.474 { 00:10:54.474 "name": "BaseBdev4", 00:10:54.474 "uuid": "2e60692f-b581-41c2-abee-e884a71dc6ff", 00:10:54.474 "is_configured": true, 00:10:54.474 "data_offset": 0, 00:10:54.474 "data_size": 65536 00:10:54.474 } 00:10:54.474 ] 00:10:54.474 }' 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.474 08:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.733 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.733 [2024-12-13 08:22:07.086690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.991 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.991 08:22:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.991 "name": "Existed_Raid", 00:10:54.991 "aliases": [ 00:10:54.991 "570af3a7-4bfe-4386-a24b-df0a965d24f1" 00:10:54.991 ], 00:10:54.991 "product_name": "Raid Volume", 00:10:54.991 "block_size": 512, 00:10:54.991 "num_blocks": 262144, 00:10:54.991 "uuid": "570af3a7-4bfe-4386-a24b-df0a965d24f1", 00:10:54.991 "assigned_rate_limits": { 00:10:54.991 "rw_ios_per_sec": 0, 00:10:54.991 "rw_mbytes_per_sec": 0, 00:10:54.991 "r_mbytes_per_sec": 0, 00:10:54.991 "w_mbytes_per_sec": 0 00:10:54.991 }, 00:10:54.991 "claimed": false, 00:10:54.991 "zoned": false, 00:10:54.991 "supported_io_types": { 00:10:54.991 "read": true, 00:10:54.991 "write": true, 00:10:54.991 "unmap": true, 00:10:54.992 "flush": true, 00:10:54.992 "reset": true, 00:10:54.992 "nvme_admin": false, 00:10:54.992 "nvme_io": false, 00:10:54.992 "nvme_io_md": false, 00:10:54.992 "write_zeroes": true, 00:10:54.992 "zcopy": false, 00:10:54.992 "get_zone_info": false, 00:10:54.992 "zone_management": false, 00:10:54.992 "zone_append": false, 00:10:54.992 "compare": false, 00:10:54.992 "compare_and_write": false, 00:10:54.992 "abort": false, 00:10:54.992 "seek_hole": false, 00:10:54.992 "seek_data": false, 00:10:54.992 "copy": false, 00:10:54.992 "nvme_iov_md": false 00:10:54.992 }, 00:10:54.992 "memory_domains": [ 00:10:54.992 { 00:10:54.992 "dma_device_id": "system", 00:10:54.992 "dma_device_type": 1 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.992 "dma_device_type": 2 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "system", 00:10:54.992 "dma_device_type": 1 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.992 "dma_device_type": 2 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "system", 00:10:54.992 "dma_device_type": 1 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:54.992 "dma_device_type": 2 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "system", 00:10:54.992 "dma_device_type": 1 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.992 "dma_device_type": 2 00:10:54.992 } 00:10:54.992 ], 00:10:54.992 "driver_specific": { 00:10:54.992 "raid": { 00:10:54.992 "uuid": "570af3a7-4bfe-4386-a24b-df0a965d24f1", 00:10:54.992 "strip_size_kb": 64, 00:10:54.992 "state": "online", 00:10:54.992 "raid_level": "raid0", 00:10:54.992 "superblock": false, 00:10:54.992 "num_base_bdevs": 4, 00:10:54.992 "num_base_bdevs_discovered": 4, 00:10:54.992 "num_base_bdevs_operational": 4, 00:10:54.992 "base_bdevs_list": [ 00:10:54.992 { 00:10:54.992 "name": "BaseBdev1", 00:10:54.992 "uuid": "7cac4acd-6cc8-4963-8869-b710227632f9", 00:10:54.992 "is_configured": true, 00:10:54.992 "data_offset": 0, 00:10:54.992 "data_size": 65536 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "name": "BaseBdev2", 00:10:54.992 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:54.992 "is_configured": true, 00:10:54.992 "data_offset": 0, 00:10:54.992 "data_size": 65536 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "name": "BaseBdev3", 00:10:54.992 "uuid": "21cdab76-b884-4f80-a7f1-084e206c66d5", 00:10:54.992 "is_configured": true, 00:10:54.992 "data_offset": 0, 00:10:54.992 "data_size": 65536 00:10:54.992 }, 00:10:54.992 { 00:10:54.992 "name": "BaseBdev4", 00:10:54.992 "uuid": "2e60692f-b581-41c2-abee-e884a71dc6ff", 00:10:54.992 "is_configured": true, 00:10:54.992 "data_offset": 0, 00:10:54.992 "data_size": 65536 00:10:54.992 } 00:10:54.992 ] 00:10:54.992 } 00:10:54.992 } 00:10:54.992 }' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.992 BaseBdev2 00:10:54.992 BaseBdev3 
00:10:54.992 BaseBdev4' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.992 08:22:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.992 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.251 08:22:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.251 [2024-12-13 08:22:07.417836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.251 [2024-12-13 08:22:07.417923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.251 [2024-12-13 08:22:07.418023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.251 "name": "Existed_Raid", 00:10:55.251 "uuid": "570af3a7-4bfe-4386-a24b-df0a965d24f1", 00:10:55.251 "strip_size_kb": 64, 00:10:55.251 "state": "offline", 00:10:55.251 "raid_level": "raid0", 00:10:55.251 "superblock": false, 00:10:55.251 "num_base_bdevs": 4, 00:10:55.251 "num_base_bdevs_discovered": 3, 00:10:55.251 "num_base_bdevs_operational": 3, 00:10:55.251 "base_bdevs_list": [ 00:10:55.251 { 00:10:55.251 "name": null, 00:10:55.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.251 "is_configured": false, 00:10:55.251 "data_offset": 0, 00:10:55.251 "data_size": 65536 00:10:55.251 }, 00:10:55.251 { 00:10:55.251 "name": "BaseBdev2", 00:10:55.251 "uuid": "03af0192-f3fb-4247-811d-e7ed81898e29", 00:10:55.251 "is_configured": 
true, 00:10:55.251 "data_offset": 0, 00:10:55.251 "data_size": 65536 00:10:55.251 }, 00:10:55.251 { 00:10:55.251 "name": "BaseBdev3", 00:10:55.251 "uuid": "21cdab76-b884-4f80-a7f1-084e206c66d5", 00:10:55.251 "is_configured": true, 00:10:55.251 "data_offset": 0, 00:10:55.251 "data_size": 65536 00:10:55.251 }, 00:10:55.251 { 00:10:55.251 "name": "BaseBdev4", 00:10:55.251 "uuid": "2e60692f-b581-41c2-abee-e884a71dc6ff", 00:10:55.251 "is_configured": true, 00:10:55.251 "data_offset": 0, 00:10:55.251 "data_size": 65536 00:10:55.251 } 00:10:55.251 ] 00:10:55.251 }' 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.251 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.819 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.820 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.820 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.820 08:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.820 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.820 08:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.820 [2024-12-13 08:22:08.036698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.820 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.079 [2024-12-13 08:22:08.210255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.079 08:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.079 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.079 [2024-12-13 08:22:08.374845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:56.079 [2024-12-13 08:22:08.374976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 BaseBdev2 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 [ 00:10:56.341 { 00:10:56.341 "name": "BaseBdev2", 00:10:56.341 "aliases": [ 00:10:56.341 "53e95828-0adf-4190-8d7c-a05a3885c3aa" 00:10:56.341 ], 00:10:56.341 "product_name": "Malloc disk", 00:10:56.341 "block_size": 512, 00:10:56.341 "num_blocks": 65536, 00:10:56.341 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:56.341 "assigned_rate_limits": { 00:10:56.341 "rw_ios_per_sec": 0, 00:10:56.341 "rw_mbytes_per_sec": 0, 00:10:56.341 "r_mbytes_per_sec": 0, 00:10:56.341 "w_mbytes_per_sec": 0 00:10:56.341 }, 00:10:56.341 "claimed": false, 00:10:56.341 "zoned": false, 00:10:56.341 "supported_io_types": { 00:10:56.341 "read": true, 00:10:56.341 "write": true, 00:10:56.341 "unmap": true, 00:10:56.341 "flush": true, 00:10:56.341 "reset": true, 00:10:56.341 "nvme_admin": false, 00:10:56.341 "nvme_io": false, 00:10:56.341 "nvme_io_md": false, 00:10:56.341 "write_zeroes": true, 00:10:56.341 "zcopy": true, 00:10:56.341 "get_zone_info": false, 00:10:56.341 "zone_management": false, 00:10:56.341 "zone_append": false, 00:10:56.341 "compare": false, 00:10:56.341 "compare_and_write": false, 00:10:56.341 "abort": true, 00:10:56.341 "seek_hole": false, 00:10:56.341 
"seek_data": false, 00:10:56.341 "copy": true, 00:10:56.341 "nvme_iov_md": false 00:10:56.341 }, 00:10:56.341 "memory_domains": [ 00:10:56.341 { 00:10:56.341 "dma_device_id": "system", 00:10:56.341 "dma_device_type": 1 00:10:56.341 }, 00:10:56.341 { 00:10:56.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.341 "dma_device_type": 2 00:10:56.341 } 00:10:56.341 ], 00:10:56.341 "driver_specific": {} 00:10:56.341 } 00:10:56.341 ] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 BaseBdev3 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.341 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.342 [ 00:10:56.342 { 00:10:56.342 "name": "BaseBdev3", 00:10:56.342 "aliases": [ 00:10:56.342 "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2" 00:10:56.342 ], 00:10:56.342 "product_name": "Malloc disk", 00:10:56.342 "block_size": 512, 00:10:56.342 "num_blocks": 65536, 00:10:56.342 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:56.342 "assigned_rate_limits": { 00:10:56.342 "rw_ios_per_sec": 0, 00:10:56.342 "rw_mbytes_per_sec": 0, 00:10:56.342 "r_mbytes_per_sec": 0, 00:10:56.342 "w_mbytes_per_sec": 0 00:10:56.342 }, 00:10:56.342 "claimed": false, 00:10:56.342 "zoned": false, 00:10:56.342 "supported_io_types": { 00:10:56.342 "read": true, 00:10:56.342 "write": true, 00:10:56.342 "unmap": true, 00:10:56.342 "flush": true, 00:10:56.342 "reset": true, 00:10:56.342 "nvme_admin": false, 00:10:56.342 "nvme_io": false, 00:10:56.342 "nvme_io_md": false, 00:10:56.342 "write_zeroes": true, 00:10:56.342 "zcopy": true, 00:10:56.342 "get_zone_info": false, 00:10:56.342 "zone_management": false, 00:10:56.342 "zone_append": false, 00:10:56.342 "compare": false, 00:10:56.342 "compare_and_write": false, 00:10:56.342 "abort": true, 00:10:56.342 "seek_hole": false, 00:10:56.342 "seek_data": false, 
00:10:56.342 "copy": true, 00:10:56.342 "nvme_iov_md": false 00:10:56.342 }, 00:10:56.342 "memory_domains": [ 00:10:56.342 { 00:10:56.342 "dma_device_id": "system", 00:10:56.342 "dma_device_type": 1 00:10:56.342 }, 00:10:56.342 { 00:10:56.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.342 "dma_device_type": 2 00:10:56.342 } 00:10:56.342 ], 00:10:56.342 "driver_specific": {} 00:10:56.342 } 00:10:56.342 ] 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.342 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.600 BaseBdev4 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.601 
08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.601 [ 00:10:56.601 { 00:10:56.601 "name": "BaseBdev4", 00:10:56.601 "aliases": [ 00:10:56.601 "2bf12450-75e2-4adb-bf3c-160482fe9405" 00:10:56.601 ], 00:10:56.601 "product_name": "Malloc disk", 00:10:56.601 "block_size": 512, 00:10:56.601 "num_blocks": 65536, 00:10:56.601 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:56.601 "assigned_rate_limits": { 00:10:56.601 "rw_ios_per_sec": 0, 00:10:56.601 "rw_mbytes_per_sec": 0, 00:10:56.601 "r_mbytes_per_sec": 0, 00:10:56.601 "w_mbytes_per_sec": 0 00:10:56.601 }, 00:10:56.601 "claimed": false, 00:10:56.601 "zoned": false, 00:10:56.601 "supported_io_types": { 00:10:56.601 "read": true, 00:10:56.601 "write": true, 00:10:56.601 "unmap": true, 00:10:56.601 "flush": true, 00:10:56.601 "reset": true, 00:10:56.601 "nvme_admin": false, 00:10:56.601 "nvme_io": false, 00:10:56.601 "nvme_io_md": false, 00:10:56.601 "write_zeroes": true, 00:10:56.601 "zcopy": true, 00:10:56.601 "get_zone_info": false, 00:10:56.601 "zone_management": false, 00:10:56.601 "zone_append": false, 00:10:56.601 "compare": false, 00:10:56.601 "compare_and_write": false, 00:10:56.601 "abort": true, 00:10:56.601 "seek_hole": false, 00:10:56.601 "seek_data": false, 00:10:56.601 
"copy": true, 00:10:56.601 "nvme_iov_md": false 00:10:56.601 }, 00:10:56.601 "memory_domains": [ 00:10:56.601 { 00:10:56.601 "dma_device_id": "system", 00:10:56.601 "dma_device_type": 1 00:10:56.601 }, 00:10:56.601 { 00:10:56.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.601 "dma_device_type": 2 00:10:56.601 } 00:10:56.601 ], 00:10:56.601 "driver_specific": {} 00:10:56.601 } 00:10:56.601 ] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.601 [2024-12-13 08:22:08.796709] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.601 [2024-12-13 08:22:08.796803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.601 [2024-12-13 08:22:08.796854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.601 [2024-12-13 08:22:08.798767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.601 [2024-12-13 08:22:08.798863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.601 08:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.601 "name": "Existed_Raid", 00:10:56.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.601 "strip_size_kb": 64, 00:10:56.601 "state": "configuring", 00:10:56.601 
"raid_level": "raid0", 00:10:56.601 "superblock": false, 00:10:56.601 "num_base_bdevs": 4, 00:10:56.601 "num_base_bdevs_discovered": 3, 00:10:56.601 "num_base_bdevs_operational": 4, 00:10:56.601 "base_bdevs_list": [ 00:10:56.601 { 00:10:56.601 "name": "BaseBdev1", 00:10:56.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.601 "is_configured": false, 00:10:56.601 "data_offset": 0, 00:10:56.601 "data_size": 0 00:10:56.601 }, 00:10:56.601 { 00:10:56.601 "name": "BaseBdev2", 00:10:56.601 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:56.601 "is_configured": true, 00:10:56.601 "data_offset": 0, 00:10:56.601 "data_size": 65536 00:10:56.601 }, 00:10:56.601 { 00:10:56.601 "name": "BaseBdev3", 00:10:56.601 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:56.601 "is_configured": true, 00:10:56.601 "data_offset": 0, 00:10:56.601 "data_size": 65536 00:10:56.601 }, 00:10:56.601 { 00:10:56.601 "name": "BaseBdev4", 00:10:56.601 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:56.601 "is_configured": true, 00:10:56.601 "data_offset": 0, 00:10:56.601 "data_size": 65536 00:10:56.601 } 00:10:56.601 ] 00:10:56.601 }' 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.601 08:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.170 [2024-12-13 08:22:09.279921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.170 "name": "Existed_Raid", 00:10:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.170 "strip_size_kb": 64, 00:10:57.170 "state": "configuring", 00:10:57.170 "raid_level": "raid0", 00:10:57.170 "superblock": false, 00:10:57.170 
"num_base_bdevs": 4, 00:10:57.170 "num_base_bdevs_discovered": 2, 00:10:57.170 "num_base_bdevs_operational": 4, 00:10:57.170 "base_bdevs_list": [ 00:10:57.170 { 00:10:57.170 "name": "BaseBdev1", 00:10:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.170 "is_configured": false, 00:10:57.170 "data_offset": 0, 00:10:57.170 "data_size": 0 00:10:57.170 }, 00:10:57.170 { 00:10:57.170 "name": null, 00:10:57.170 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:57.170 "is_configured": false, 00:10:57.170 "data_offset": 0, 00:10:57.170 "data_size": 65536 00:10:57.170 }, 00:10:57.170 { 00:10:57.170 "name": "BaseBdev3", 00:10:57.170 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:57.170 "is_configured": true, 00:10:57.170 "data_offset": 0, 00:10:57.170 "data_size": 65536 00:10:57.170 }, 00:10:57.170 { 00:10:57.170 "name": "BaseBdev4", 00:10:57.170 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:57.170 "is_configured": true, 00:10:57.170 "data_offset": 0, 00:10:57.170 "data_size": 65536 00:10:57.170 } 00:10:57.170 ] 00:10:57.170 }' 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.170 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:57.429 08:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.429 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.689 [2024-12-13 08:22:09.812881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.689 BaseBdev1 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.689 08:22:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.689 [ 00:10:57.689 { 00:10:57.689 "name": "BaseBdev1", 00:10:57.689 "aliases": [ 00:10:57.689 "81f19ff4-84a6-4116-b152-7dc3b9efb2d6" 00:10:57.689 ], 00:10:57.689 "product_name": "Malloc disk", 00:10:57.689 "block_size": 512, 00:10:57.690 "num_blocks": 65536, 00:10:57.690 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:57.690 "assigned_rate_limits": { 00:10:57.690 "rw_ios_per_sec": 0, 00:10:57.690 "rw_mbytes_per_sec": 0, 00:10:57.690 "r_mbytes_per_sec": 0, 00:10:57.690 "w_mbytes_per_sec": 0 00:10:57.690 }, 00:10:57.690 "claimed": true, 00:10:57.690 "claim_type": "exclusive_write", 00:10:57.690 "zoned": false, 00:10:57.690 "supported_io_types": { 00:10:57.690 "read": true, 00:10:57.690 "write": true, 00:10:57.690 "unmap": true, 00:10:57.690 "flush": true, 00:10:57.690 "reset": true, 00:10:57.690 "nvme_admin": false, 00:10:57.690 "nvme_io": false, 00:10:57.690 "nvme_io_md": false, 00:10:57.690 "write_zeroes": true, 00:10:57.690 "zcopy": true, 00:10:57.690 "get_zone_info": false, 00:10:57.690 "zone_management": false, 00:10:57.690 "zone_append": false, 00:10:57.690 "compare": false, 00:10:57.690 "compare_and_write": false, 00:10:57.690 "abort": true, 00:10:57.690 "seek_hole": false, 00:10:57.690 "seek_data": false, 00:10:57.690 "copy": true, 00:10:57.690 "nvme_iov_md": false 00:10:57.690 }, 00:10:57.690 "memory_domains": [ 00:10:57.690 { 00:10:57.690 "dma_device_id": "system", 00:10:57.690 "dma_device_type": 1 00:10:57.690 }, 00:10:57.690 { 00:10:57.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.690 "dma_device_type": 2 00:10:57.690 } 00:10:57.690 ], 00:10:57.690 "driver_specific": {} 00:10:57.690 } 00:10:57.690 ] 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.690 "name": "Existed_Raid", 00:10:57.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.690 "strip_size_kb": 64, 00:10:57.690 "state": "configuring", 00:10:57.690 "raid_level": "raid0", 00:10:57.690 "superblock": false, 
00:10:57.690 "num_base_bdevs": 4, 00:10:57.690 "num_base_bdevs_discovered": 3, 00:10:57.690 "num_base_bdevs_operational": 4, 00:10:57.690 "base_bdevs_list": [ 00:10:57.690 { 00:10:57.690 "name": "BaseBdev1", 00:10:57.690 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:57.690 "is_configured": true, 00:10:57.690 "data_offset": 0, 00:10:57.690 "data_size": 65536 00:10:57.690 }, 00:10:57.690 { 00:10:57.690 "name": null, 00:10:57.690 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:57.690 "is_configured": false, 00:10:57.690 "data_offset": 0, 00:10:57.690 "data_size": 65536 00:10:57.690 }, 00:10:57.690 { 00:10:57.690 "name": "BaseBdev3", 00:10:57.690 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:57.690 "is_configured": true, 00:10:57.690 "data_offset": 0, 00:10:57.690 "data_size": 65536 00:10:57.690 }, 00:10:57.690 { 00:10:57.690 "name": "BaseBdev4", 00:10:57.690 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:57.690 "is_configured": true, 00:10:57.690 "data_offset": 0, 00:10:57.690 "data_size": 65536 00:10:57.690 } 00:10:57.690 ] 00:10:57.690 }' 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.690 08:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:58.258 08:22:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.258 [2024-12-13 08:22:10.380072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.258 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.258 "name": "Existed_Raid", 00:10:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.259 "strip_size_kb": 64, 00:10:58.259 "state": "configuring", 00:10:58.259 "raid_level": "raid0", 00:10:58.259 "superblock": false, 00:10:58.259 "num_base_bdevs": 4, 00:10:58.259 "num_base_bdevs_discovered": 2, 00:10:58.259 "num_base_bdevs_operational": 4, 00:10:58.259 "base_bdevs_list": [ 00:10:58.259 { 00:10:58.259 "name": "BaseBdev1", 00:10:58.259 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:58.259 "is_configured": true, 00:10:58.259 "data_offset": 0, 00:10:58.259 "data_size": 65536 00:10:58.259 }, 00:10:58.259 { 00:10:58.259 "name": null, 00:10:58.259 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:58.259 "is_configured": false, 00:10:58.259 "data_offset": 0, 00:10:58.259 "data_size": 65536 00:10:58.259 }, 00:10:58.259 { 00:10:58.259 "name": null, 00:10:58.259 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:58.259 "is_configured": false, 00:10:58.259 "data_offset": 0, 00:10:58.259 "data_size": 65536 00:10:58.259 }, 00:10:58.259 { 00:10:58.259 "name": "BaseBdev4", 00:10:58.259 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:58.259 "is_configured": true, 00:10:58.259 "data_offset": 0, 00:10:58.259 "data_size": 65536 00:10:58.259 } 00:10:58.259 ] 00:10:58.259 }' 00:10:58.259 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.259 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.518 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:58.518 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.518 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.518 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.518 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.776 [2024-12-13 08:22:10.891187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.776 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.777 "name": "Existed_Raid", 00:10:58.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.777 "strip_size_kb": 64, 00:10:58.777 "state": "configuring", 00:10:58.777 "raid_level": "raid0", 00:10:58.777 "superblock": false, 00:10:58.777 "num_base_bdevs": 4, 00:10:58.777 "num_base_bdevs_discovered": 3, 00:10:58.777 "num_base_bdevs_operational": 4, 00:10:58.777 "base_bdevs_list": [ 00:10:58.777 { 00:10:58.777 "name": "BaseBdev1", 00:10:58.777 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:58.777 "is_configured": true, 00:10:58.777 "data_offset": 0, 00:10:58.777 "data_size": 65536 00:10:58.777 }, 00:10:58.777 { 00:10:58.777 "name": null, 00:10:58.777 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:58.777 "is_configured": false, 00:10:58.777 "data_offset": 0, 00:10:58.777 "data_size": 65536 00:10:58.777 }, 00:10:58.777 { 00:10:58.777 "name": "BaseBdev3", 00:10:58.777 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:58.777 "is_configured": 
true, 00:10:58.777 "data_offset": 0, 00:10:58.777 "data_size": 65536 00:10:58.777 }, 00:10:58.777 { 00:10:58.777 "name": "BaseBdev4", 00:10:58.777 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:58.777 "is_configured": true, 00:10:58.777 "data_offset": 0, 00:10:58.777 "data_size": 65536 00:10:58.777 } 00:10:58.777 ] 00:10:58.777 }' 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.777 08:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.036 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.036 [2024-12-13 08:22:11.378494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.295 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.295 "name": "Existed_Raid", 00:10:59.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.295 "strip_size_kb": 64, 00:10:59.295 "state": "configuring", 00:10:59.296 "raid_level": "raid0", 00:10:59.296 "superblock": false, 00:10:59.296 "num_base_bdevs": 4, 00:10:59.296 "num_base_bdevs_discovered": 2, 00:10:59.296 "num_base_bdevs_operational": 4, 00:10:59.296 
"base_bdevs_list": [ 00:10:59.296 { 00:10:59.296 "name": null, 00:10:59.296 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:59.296 "is_configured": false, 00:10:59.296 "data_offset": 0, 00:10:59.296 "data_size": 65536 00:10:59.296 }, 00:10:59.296 { 00:10:59.296 "name": null, 00:10:59.296 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:59.296 "is_configured": false, 00:10:59.296 "data_offset": 0, 00:10:59.296 "data_size": 65536 00:10:59.296 }, 00:10:59.296 { 00:10:59.296 "name": "BaseBdev3", 00:10:59.296 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:59.296 "is_configured": true, 00:10:59.296 "data_offset": 0, 00:10:59.296 "data_size": 65536 00:10:59.296 }, 00:10:59.296 { 00:10:59.296 "name": "BaseBdev4", 00:10:59.296 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:59.296 "is_configured": true, 00:10:59.296 "data_offset": 0, 00:10:59.296 "data_size": 65536 00:10:59.296 } 00:10:59.296 ] 00:10:59.296 }' 00:10:59.296 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.296 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.555 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.555 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.555 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.555 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.555 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:59.814 08:22:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.814 [2024-12-13 08:22:11.943952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:59.814 08:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.814 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.814 "name": "Existed_Raid", 00:10:59.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.814 "strip_size_kb": 64, 00:10:59.814 "state": "configuring", 00:10:59.814 "raid_level": "raid0", 00:10:59.814 "superblock": false, 00:10:59.814 "num_base_bdevs": 4, 00:10:59.814 "num_base_bdevs_discovered": 3, 00:10:59.815 "num_base_bdevs_operational": 4, 00:10:59.815 "base_bdevs_list": [ 00:10:59.815 { 00:10:59.815 "name": null, 00:10:59.815 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:10:59.815 "is_configured": false, 00:10:59.815 "data_offset": 0, 00:10:59.815 "data_size": 65536 00:10:59.815 }, 00:10:59.815 { 00:10:59.815 "name": "BaseBdev2", 00:10:59.815 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:10:59.815 "is_configured": true, 00:10:59.815 "data_offset": 0, 00:10:59.815 "data_size": 65536 00:10:59.815 }, 00:10:59.815 { 00:10:59.815 "name": "BaseBdev3", 00:10:59.815 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:10:59.815 "is_configured": true, 00:10:59.815 "data_offset": 0, 00:10:59.815 "data_size": 65536 00:10:59.815 }, 00:10:59.815 { 00:10:59.815 "name": "BaseBdev4", 00:10:59.815 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:10:59.815 "is_configured": true, 00:10:59.815 "data_offset": 0, 00:10:59.815 "data_size": 65536 00:10:59.815 } 00:10:59.815 ] 00:10:59.815 }' 00:10:59.815 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.815 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81f19ff4-84a6-4116-b152-7dc3b9efb2d6 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.382 [2024-12-13 08:22:12.586227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.382 [2024-12-13 08:22:12.586360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.382 [2024-12-13 08:22:12.586389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:00.382 [2024-12-13 08:22:12.586738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:00.382 [2024-12-13 08:22:12.586981] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.382 [2024-12-13 08:22:12.587033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:00.382 NewBaseBdev 00:11:00.382 [2024-12-13 08:22:12.587375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.382 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.383 [ 00:11:00.383 { 
00:11:00.383 "name": "NewBaseBdev", 00:11:00.383 "aliases": [ 00:11:00.383 "81f19ff4-84a6-4116-b152-7dc3b9efb2d6" 00:11:00.383 ], 00:11:00.383 "product_name": "Malloc disk", 00:11:00.383 "block_size": 512, 00:11:00.383 "num_blocks": 65536, 00:11:00.383 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:11:00.383 "assigned_rate_limits": { 00:11:00.383 "rw_ios_per_sec": 0, 00:11:00.383 "rw_mbytes_per_sec": 0, 00:11:00.383 "r_mbytes_per_sec": 0, 00:11:00.383 "w_mbytes_per_sec": 0 00:11:00.383 }, 00:11:00.383 "claimed": true, 00:11:00.383 "claim_type": "exclusive_write", 00:11:00.383 "zoned": false, 00:11:00.383 "supported_io_types": { 00:11:00.383 "read": true, 00:11:00.383 "write": true, 00:11:00.383 "unmap": true, 00:11:00.383 "flush": true, 00:11:00.383 "reset": true, 00:11:00.383 "nvme_admin": false, 00:11:00.383 "nvme_io": false, 00:11:00.383 "nvme_io_md": false, 00:11:00.383 "write_zeroes": true, 00:11:00.383 "zcopy": true, 00:11:00.383 "get_zone_info": false, 00:11:00.383 "zone_management": false, 00:11:00.383 "zone_append": false, 00:11:00.383 "compare": false, 00:11:00.383 "compare_and_write": false, 00:11:00.383 "abort": true, 00:11:00.383 "seek_hole": false, 00:11:00.383 "seek_data": false, 00:11:00.383 "copy": true, 00:11:00.383 "nvme_iov_md": false 00:11:00.383 }, 00:11:00.383 "memory_domains": [ 00:11:00.383 { 00:11:00.383 "dma_device_id": "system", 00:11:00.383 "dma_device_type": 1 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.383 "dma_device_type": 2 00:11:00.383 } 00:11:00.383 ], 00:11:00.383 "driver_specific": {} 00:11:00.383 } 00:11:00.383 ] 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:00.383 
08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.383 "name": "Existed_Raid", 00:11:00.383 "uuid": "662e8733-1a61-469c-a8e3-dd04d79eb759", 00:11:00.383 "strip_size_kb": 64, 00:11:00.383 "state": "online", 00:11:00.383 "raid_level": "raid0", 00:11:00.383 "superblock": false, 00:11:00.383 "num_base_bdevs": 4, 00:11:00.383 "num_base_bdevs_discovered": 4, 00:11:00.383 
"num_base_bdevs_operational": 4, 00:11:00.383 "base_bdevs_list": [ 00:11:00.383 { 00:11:00.383 "name": "NewBaseBdev", 00:11:00.383 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 0, 00:11:00.383 "data_size": 65536 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "BaseBdev2", 00:11:00.383 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 0, 00:11:00.383 "data_size": 65536 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "BaseBdev3", 00:11:00.383 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 0, 00:11:00.383 "data_size": 65536 00:11:00.383 }, 00:11:00.383 { 00:11:00.383 "name": "BaseBdev4", 00:11:00.383 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:11:00.383 "is_configured": true, 00:11:00.383 "data_offset": 0, 00:11:00.383 "data_size": 65536 00:11:00.383 } 00:11:00.383 ] 00:11:00.383 }' 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.383 08:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.950 [2024-12-13 08:22:13.117801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.950 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.950 "name": "Existed_Raid", 00:11:00.950 "aliases": [ 00:11:00.950 "662e8733-1a61-469c-a8e3-dd04d79eb759" 00:11:00.950 ], 00:11:00.950 "product_name": "Raid Volume", 00:11:00.950 "block_size": 512, 00:11:00.950 "num_blocks": 262144, 00:11:00.950 "uuid": "662e8733-1a61-469c-a8e3-dd04d79eb759", 00:11:00.950 "assigned_rate_limits": { 00:11:00.950 "rw_ios_per_sec": 0, 00:11:00.950 "rw_mbytes_per_sec": 0, 00:11:00.950 "r_mbytes_per_sec": 0, 00:11:00.950 "w_mbytes_per_sec": 0 00:11:00.950 }, 00:11:00.950 "claimed": false, 00:11:00.950 "zoned": false, 00:11:00.950 "supported_io_types": { 00:11:00.950 "read": true, 00:11:00.950 "write": true, 00:11:00.950 "unmap": true, 00:11:00.950 "flush": true, 00:11:00.950 "reset": true, 00:11:00.950 "nvme_admin": false, 00:11:00.950 "nvme_io": false, 00:11:00.950 "nvme_io_md": false, 00:11:00.950 "write_zeroes": true, 00:11:00.950 "zcopy": false, 00:11:00.950 "get_zone_info": false, 00:11:00.950 "zone_management": false, 00:11:00.950 "zone_append": false, 00:11:00.950 "compare": false, 00:11:00.950 "compare_and_write": false, 00:11:00.950 "abort": false, 00:11:00.950 "seek_hole": false, 00:11:00.950 "seek_data": false, 00:11:00.950 "copy": false, 00:11:00.950 "nvme_iov_md": false 00:11:00.950 }, 00:11:00.950 "memory_domains": [ 00:11:00.950 { 00:11:00.950 "dma_device_id": "system", 
00:11:00.950 "dma_device_type": 1 00:11:00.950 }, 00:11:00.950 { 00:11:00.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.950 "dma_device_type": 2 00:11:00.950 }, 00:11:00.950 { 00:11:00.950 "dma_device_id": "system", 00:11:00.950 "dma_device_type": 1 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.951 "dma_device_type": 2 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "dma_device_id": "system", 00:11:00.951 "dma_device_type": 1 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.951 "dma_device_type": 2 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "dma_device_id": "system", 00:11:00.951 "dma_device_type": 1 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.951 "dma_device_type": 2 00:11:00.951 } 00:11:00.951 ], 00:11:00.951 "driver_specific": { 00:11:00.951 "raid": { 00:11:00.951 "uuid": "662e8733-1a61-469c-a8e3-dd04d79eb759", 00:11:00.951 "strip_size_kb": 64, 00:11:00.951 "state": "online", 00:11:00.951 "raid_level": "raid0", 00:11:00.951 "superblock": false, 00:11:00.951 "num_base_bdevs": 4, 00:11:00.951 "num_base_bdevs_discovered": 4, 00:11:00.951 "num_base_bdevs_operational": 4, 00:11:00.951 "base_bdevs_list": [ 00:11:00.951 { 00:11:00.951 "name": "NewBaseBdev", 00:11:00.951 "uuid": "81f19ff4-84a6-4116-b152-7dc3b9efb2d6", 00:11:00.951 "is_configured": true, 00:11:00.951 "data_offset": 0, 00:11:00.951 "data_size": 65536 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "name": "BaseBdev2", 00:11:00.951 "uuid": "53e95828-0adf-4190-8d7c-a05a3885c3aa", 00:11:00.951 "is_configured": true, 00:11:00.951 "data_offset": 0, 00:11:00.951 "data_size": 65536 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "name": "BaseBdev3", 00:11:00.951 "uuid": "d209bb24-2e7f-4203-ad6a-f66ae31fd9b2", 00:11:00.951 "is_configured": true, 00:11:00.951 "data_offset": 0, 00:11:00.951 "data_size": 65536 00:11:00.951 }, 00:11:00.951 { 00:11:00.951 "name": "BaseBdev4", 
00:11:00.951 "uuid": "2bf12450-75e2-4adb-bf3c-160482fe9405", 00:11:00.951 "is_configured": true, 00:11:00.951 "data_offset": 0, 00:11:00.951 "data_size": 65536 00:11:00.951 } 00:11:00.951 ] 00:11:00.951 } 00:11:00.951 } 00:11:00.951 }' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:00.951 BaseBdev2 00:11:00.951 BaseBdev3 00:11:00.951 BaseBdev4' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.951 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.209 08:22:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 [2024-12-13 08:22:13.472896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.209 [2024-12-13 08:22:13.472985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.209 [2024-12-13 08:22:13.473153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.209 [2024-12-13 08:22:13.473275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.209 [2024-12-13 08:22:13.473331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69536 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69536 ']' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69536 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69536 00:11:01.209 killing process with pid 69536 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.209 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69536' 00:11:01.210 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69536 00:11:01.210 [2024-12-13 08:22:13.522767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.210 08:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69536 00:11:01.778 [2024-12-13 08:22:13.976251] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.155 ************************************ 00:11:03.155 END TEST raid_state_function_test 00:11:03.155 ************************************ 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.155 00:11:03.155 real 0m12.315s 00:11:03.155 user 0m19.504s 00:11:03.155 sys 0m2.116s 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.155 08:22:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:11:03.155 08:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.155 08:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.155 08:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.155 ************************************ 00:11:03.155 START TEST raid_state_function_test_sb 00:11:03.155 ************************************ 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.155 08:22:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.155 Process raid pid: 70219 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70219 00:11:03.155 08:22:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70219' 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70219 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70219 ']' 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.155 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.156 08:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.156 [2024-12-13 08:22:15.415525] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:11:03.156 [2024-12-13 08:22:15.415751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.414 [2024-12-13 08:22:15.575614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.414 [2024-12-13 08:22:15.701152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.674 [2024-12-13 08:22:15.930931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.674 [2024-12-13 08:22:15.930987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.933 [2024-12-13 08:22:16.288380] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.933 [2024-12-13 08:22:16.288519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.933 [2024-12-13 08:22:16.288555] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.933 [2024-12-13 08:22:16.288583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.933 [2024-12-13 08:22:16.288605] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:03.933 [2024-12-13 08:22:16.288630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.933 [2024-12-13 08:22:16.288682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.933 [2024-12-13 08:22:16.288706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.933 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.193 08:22:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.193 "name": "Existed_Raid", 00:11:04.193 "uuid": "211b6efe-0f6c-4730-ba65-546682138002", 00:11:04.193 "strip_size_kb": 64, 00:11:04.193 "state": "configuring", 00:11:04.193 "raid_level": "raid0", 00:11:04.193 "superblock": true, 00:11:04.193 "num_base_bdevs": 4, 00:11:04.193 "num_base_bdevs_discovered": 0, 00:11:04.193 "num_base_bdevs_operational": 4, 00:11:04.193 "base_bdevs_list": [ 00:11:04.193 { 00:11:04.193 "name": "BaseBdev1", 00:11:04.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.193 "is_configured": false, 00:11:04.193 "data_offset": 0, 00:11:04.193 "data_size": 0 00:11:04.193 }, 00:11:04.193 { 00:11:04.193 "name": "BaseBdev2", 00:11:04.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.193 "is_configured": false, 00:11:04.193 "data_offset": 0, 00:11:04.193 "data_size": 0 00:11:04.193 }, 00:11:04.193 { 00:11:04.193 "name": "BaseBdev3", 00:11:04.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.193 "is_configured": false, 00:11:04.193 "data_offset": 0, 00:11:04.193 "data_size": 0 00:11:04.193 }, 00:11:04.193 { 00:11:04.193 "name": "BaseBdev4", 00:11:04.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.193 "is_configured": false, 00:11:04.193 "data_offset": 0, 00:11:04.193 "data_size": 0 00:11:04.193 } 00:11:04.193 ] 00:11:04.193 }' 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.193 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.462 08:22:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.462 [2024-12-13 08:22:16.759606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.462 [2024-12-13 08:22:16.759709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.462 [2024-12-13 08:22:16.771573] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.462 [2024-12-13 08:22:16.771661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.462 [2024-12-13 08:22:16.771699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.462 [2024-12-13 08:22:16.771727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.462 [2024-12-13 08:22:16.771765] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.462 [2024-12-13 08:22:16.771790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.462 [2024-12-13 08:22:16.771821] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:04.462 [2024-12-13 08:22:16.771855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.462 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.776 [2024-12-13 08:22:16.822850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.776 BaseBdev1 00:11:04.776 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.777 [ 00:11:04.777 { 00:11:04.777 "name": "BaseBdev1", 00:11:04.777 "aliases": [ 00:11:04.777 "d6d2a8d0-f077-4947-a6ca-1859a92ef813" 00:11:04.777 ], 00:11:04.777 "product_name": "Malloc disk", 00:11:04.777 "block_size": 512, 00:11:04.777 "num_blocks": 65536, 00:11:04.777 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:04.777 "assigned_rate_limits": { 00:11:04.777 "rw_ios_per_sec": 0, 00:11:04.777 "rw_mbytes_per_sec": 0, 00:11:04.777 "r_mbytes_per_sec": 0, 00:11:04.777 "w_mbytes_per_sec": 0 00:11:04.777 }, 00:11:04.777 "claimed": true, 00:11:04.777 "claim_type": "exclusive_write", 00:11:04.777 "zoned": false, 00:11:04.777 "supported_io_types": { 00:11:04.777 "read": true, 00:11:04.777 "write": true, 00:11:04.777 "unmap": true, 00:11:04.777 "flush": true, 00:11:04.777 "reset": true, 00:11:04.777 "nvme_admin": false, 00:11:04.777 "nvme_io": false, 00:11:04.777 "nvme_io_md": false, 00:11:04.777 "write_zeroes": true, 00:11:04.777 "zcopy": true, 00:11:04.777 "get_zone_info": false, 00:11:04.777 "zone_management": false, 00:11:04.777 "zone_append": false, 00:11:04.777 "compare": false, 00:11:04.777 "compare_and_write": false, 00:11:04.777 "abort": true, 00:11:04.777 "seek_hole": false, 00:11:04.777 "seek_data": false, 00:11:04.777 "copy": true, 00:11:04.777 "nvme_iov_md": false 00:11:04.777 }, 00:11:04.777 "memory_domains": [ 00:11:04.777 { 00:11:04.777 "dma_device_id": "system", 00:11:04.777 "dma_device_type": 1 00:11:04.777 }, 00:11:04.777 { 00:11:04.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.777 "dma_device_type": 2 00:11:04.777 } 
00:11:04.777 ], 00:11:04.777 "driver_specific": {} 00:11:04.777 } 00:11:04.777 ] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.777 08:22:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.777 "name": "Existed_Raid", 00:11:04.777 "uuid": "1bb693e2-c819-4868-8609-9d7c94b63fda", 00:11:04.777 "strip_size_kb": 64, 00:11:04.777 "state": "configuring", 00:11:04.777 "raid_level": "raid0", 00:11:04.777 "superblock": true, 00:11:04.777 "num_base_bdevs": 4, 00:11:04.777 "num_base_bdevs_discovered": 1, 00:11:04.777 "num_base_bdevs_operational": 4, 00:11:04.777 "base_bdevs_list": [ 00:11:04.777 { 00:11:04.777 "name": "BaseBdev1", 00:11:04.777 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:04.777 "is_configured": true, 00:11:04.777 "data_offset": 2048, 00:11:04.777 "data_size": 63488 00:11:04.777 }, 00:11:04.777 { 00:11:04.777 "name": "BaseBdev2", 00:11:04.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.777 "is_configured": false, 00:11:04.777 "data_offset": 0, 00:11:04.777 "data_size": 0 00:11:04.777 }, 00:11:04.777 { 00:11:04.777 "name": "BaseBdev3", 00:11:04.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.777 "is_configured": false, 00:11:04.777 "data_offset": 0, 00:11:04.777 "data_size": 0 00:11:04.777 }, 00:11:04.777 { 00:11:04.777 "name": "BaseBdev4", 00:11:04.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.777 "is_configured": false, 00:11:04.777 "data_offset": 0, 00:11:04.777 "data_size": 0 00:11:04.777 } 00:11:04.777 ] 00:11:04.777 }' 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.777 08:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.037 08:22:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.037 [2024-12-13 08:22:17.318101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.037 [2024-12-13 08:22:17.318255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.037 [2024-12-13 08:22:17.326185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.037 [2024-12-13 08:22:17.328284] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.037 [2024-12-13 08:22:17.328328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.037 [2024-12-13 08:22:17.328339] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.037 [2024-12-13 08:22:17.328351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.037 [2024-12-13 08:22:17.328360] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.037 [2024-12-13 08:22:17.328370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:05.037 "name": "Existed_Raid", 00:11:05.037 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:05.037 "strip_size_kb": 64, 00:11:05.037 "state": "configuring", 00:11:05.037 "raid_level": "raid0", 00:11:05.037 "superblock": true, 00:11:05.037 "num_base_bdevs": 4, 00:11:05.037 "num_base_bdevs_discovered": 1, 00:11:05.037 "num_base_bdevs_operational": 4, 00:11:05.037 "base_bdevs_list": [ 00:11:05.037 { 00:11:05.037 "name": "BaseBdev1", 00:11:05.037 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:05.037 "is_configured": true, 00:11:05.037 "data_offset": 2048, 00:11:05.037 "data_size": 63488 00:11:05.037 }, 00:11:05.037 { 00:11:05.037 "name": "BaseBdev2", 00:11:05.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.037 "is_configured": false, 00:11:05.037 "data_offset": 0, 00:11:05.037 "data_size": 0 00:11:05.037 }, 00:11:05.037 { 00:11:05.037 "name": "BaseBdev3", 00:11:05.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.037 "is_configured": false, 00:11:05.037 "data_offset": 0, 00:11:05.037 "data_size": 0 00:11:05.037 }, 00:11:05.037 { 00:11:05.037 "name": "BaseBdev4", 00:11:05.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.037 "is_configured": false, 00:11:05.037 "data_offset": 0, 00:11:05.037 "data_size": 0 00:11:05.037 } 00:11:05.037 ] 00:11:05.037 }' 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.037 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 BaseBdev2 00:11:05.607 [2024-12-13 08:22:17.846559] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 [ 00:11:05.607 { 00:11:05.607 "name": "BaseBdev2", 00:11:05.607 "aliases": [ 00:11:05.607 "00cbef47-5aa1-4254-9c99-ef95db4d51e2" 00:11:05.607 ], 00:11:05.607 "product_name": "Malloc disk", 00:11:05.607 "block_size": 512, 00:11:05.607 "num_blocks": 65536, 00:11:05.607 "uuid": 
"00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:05.607 "assigned_rate_limits": { 00:11:05.607 "rw_ios_per_sec": 0, 00:11:05.607 "rw_mbytes_per_sec": 0, 00:11:05.607 "r_mbytes_per_sec": 0, 00:11:05.607 "w_mbytes_per_sec": 0 00:11:05.607 }, 00:11:05.607 "claimed": true, 00:11:05.607 "claim_type": "exclusive_write", 00:11:05.607 "zoned": false, 00:11:05.607 "supported_io_types": { 00:11:05.607 "read": true, 00:11:05.607 "write": true, 00:11:05.607 "unmap": true, 00:11:05.607 "flush": true, 00:11:05.607 "reset": true, 00:11:05.607 "nvme_admin": false, 00:11:05.607 "nvme_io": false, 00:11:05.607 "nvme_io_md": false, 00:11:05.607 "write_zeroes": true, 00:11:05.607 "zcopy": true, 00:11:05.607 "get_zone_info": false, 00:11:05.607 "zone_management": false, 00:11:05.607 "zone_append": false, 00:11:05.607 "compare": false, 00:11:05.607 "compare_and_write": false, 00:11:05.607 "abort": true, 00:11:05.607 "seek_hole": false, 00:11:05.607 "seek_data": false, 00:11:05.607 "copy": true, 00:11:05.607 "nvme_iov_md": false 00:11:05.607 }, 00:11:05.607 "memory_domains": [ 00:11:05.607 { 00:11:05.607 "dma_device_id": "system", 00:11:05.607 "dma_device_type": 1 00:11:05.607 }, 00:11:05.607 { 00:11:05.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.607 "dma_device_type": 2 00:11:05.607 } 00:11:05.607 ], 00:11:05.607 "driver_specific": {} 00:11:05.607 } 00:11:05.607 ] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.607 08:22:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.607 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.607 "name": "Existed_Raid", 00:11:05.607 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:05.607 "strip_size_kb": 64, 00:11:05.607 "state": "configuring", 00:11:05.607 "raid_level": "raid0", 00:11:05.607 "superblock": true, 00:11:05.607 "num_base_bdevs": 4, 00:11:05.607 
"num_base_bdevs_discovered": 2, 00:11:05.607 "num_base_bdevs_operational": 4, 00:11:05.608 "base_bdevs_list": [ 00:11:05.608 { 00:11:05.608 "name": "BaseBdev1", 00:11:05.608 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:05.608 "is_configured": true, 00:11:05.608 "data_offset": 2048, 00:11:05.608 "data_size": 63488 00:11:05.608 }, 00:11:05.608 { 00:11:05.608 "name": "BaseBdev2", 00:11:05.608 "uuid": "00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:05.608 "is_configured": true, 00:11:05.608 "data_offset": 2048, 00:11:05.608 "data_size": 63488 00:11:05.608 }, 00:11:05.608 { 00:11:05.608 "name": "BaseBdev3", 00:11:05.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.608 "is_configured": false, 00:11:05.608 "data_offset": 0, 00:11:05.608 "data_size": 0 00:11:05.608 }, 00:11:05.608 { 00:11:05.608 "name": "BaseBdev4", 00:11:05.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.608 "is_configured": false, 00:11:05.608 "data_offset": 0, 00:11:05.608 "data_size": 0 00:11:05.608 } 00:11:05.608 ] 00:11:05.608 }' 00:11:05.608 08:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.608 08:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 [2024-12-13 08:22:18.368391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.178 BaseBdev3 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.178 08:22:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 [ 00:11:06.178 { 00:11:06.178 "name": "BaseBdev3", 00:11:06.178 "aliases": [ 00:11:06.178 "bc2fa250-d706-4cc8-b17e-01bb38cae864" 00:11:06.178 ], 00:11:06.178 "product_name": "Malloc disk", 00:11:06.178 "block_size": 512, 00:11:06.178 "num_blocks": 65536, 00:11:06.178 "uuid": "bc2fa250-d706-4cc8-b17e-01bb38cae864", 00:11:06.178 "assigned_rate_limits": { 00:11:06.178 "rw_ios_per_sec": 0, 00:11:06.178 "rw_mbytes_per_sec": 0, 00:11:06.178 "r_mbytes_per_sec": 0, 00:11:06.178 "w_mbytes_per_sec": 0 00:11:06.178 }, 00:11:06.178 "claimed": true, 00:11:06.178 "claim_type": "exclusive_write", 00:11:06.178 "zoned": false, 
00:11:06.178 "supported_io_types": { 00:11:06.178 "read": true, 00:11:06.178 "write": true, 00:11:06.178 "unmap": true, 00:11:06.178 "flush": true, 00:11:06.178 "reset": true, 00:11:06.178 "nvme_admin": false, 00:11:06.178 "nvme_io": false, 00:11:06.178 "nvme_io_md": false, 00:11:06.178 "write_zeroes": true, 00:11:06.178 "zcopy": true, 00:11:06.178 "get_zone_info": false, 00:11:06.178 "zone_management": false, 00:11:06.178 "zone_append": false, 00:11:06.178 "compare": false, 00:11:06.178 "compare_and_write": false, 00:11:06.178 "abort": true, 00:11:06.178 "seek_hole": false, 00:11:06.178 "seek_data": false, 00:11:06.178 "copy": true, 00:11:06.178 "nvme_iov_md": false 00:11:06.178 }, 00:11:06.178 "memory_domains": [ 00:11:06.178 { 00:11:06.178 "dma_device_id": "system", 00:11:06.178 "dma_device_type": 1 00:11:06.178 }, 00:11:06.178 { 00:11:06.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.178 "dma_device_type": 2 00:11:06.178 } 00:11:06.178 ], 00:11:06.178 "driver_specific": {} 00:11:06.178 } 00:11:06.178 ] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.178 08:22:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.178 "name": "Existed_Raid", 00:11:06.178 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:06.178 "strip_size_kb": 64, 00:11:06.178 "state": "configuring", 00:11:06.178 "raid_level": "raid0", 00:11:06.178 "superblock": true, 00:11:06.178 "num_base_bdevs": 4, 00:11:06.178 "num_base_bdevs_discovered": 3, 00:11:06.178 "num_base_bdevs_operational": 4, 00:11:06.178 "base_bdevs_list": [ 00:11:06.178 { 00:11:06.178 "name": "BaseBdev1", 00:11:06.178 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:06.178 "is_configured": true, 00:11:06.178 "data_offset": 2048, 00:11:06.178 "data_size": 63488 00:11:06.178 }, 00:11:06.178 { 
00:11:06.178 "name": "BaseBdev2", 00:11:06.178 "uuid": "00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:06.178 "is_configured": true, 00:11:06.178 "data_offset": 2048, 00:11:06.178 "data_size": 63488 00:11:06.178 }, 00:11:06.178 { 00:11:06.178 "name": "BaseBdev3", 00:11:06.178 "uuid": "bc2fa250-d706-4cc8-b17e-01bb38cae864", 00:11:06.178 "is_configured": true, 00:11:06.178 "data_offset": 2048, 00:11:06.178 "data_size": 63488 00:11:06.178 }, 00:11:06.178 { 00:11:06.178 "name": "BaseBdev4", 00:11:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.178 "is_configured": false, 00:11:06.178 "data_offset": 0, 00:11:06.178 "data_size": 0 00:11:06.178 } 00:11:06.178 ] 00:11:06.178 }' 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.178 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 [2024-12-13 08:22:18.888599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.747 [2024-12-13 08:22:18.888872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.747 [2024-12-13 08:22:18.888888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.747 [2024-12-13 08:22:18.889191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.747 [2024-12-13 08:22:18.889376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.747 [2024-12-13 08:22:18.889390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name Existed_Raid, raid_bdev 0x617000007e80 00:11:06.747 [2024-12-13 08:22:18.889550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.747 BaseBdev4 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 [ 00:11:06.747 { 00:11:06.747 "name": "BaseBdev4", 00:11:06.747 "aliases": [ 00:11:06.747 "9283586b-6704-431e-bdb2-0bc33288f83d" 00:11:06.747 ], 00:11:06.747 "product_name": "Malloc disk", 
00:11:06.747 "block_size": 512, 00:11:06.747 "num_blocks": 65536, 00:11:06.747 "uuid": "9283586b-6704-431e-bdb2-0bc33288f83d", 00:11:06.747 "assigned_rate_limits": { 00:11:06.747 "rw_ios_per_sec": 0, 00:11:06.747 "rw_mbytes_per_sec": 0, 00:11:06.747 "r_mbytes_per_sec": 0, 00:11:06.747 "w_mbytes_per_sec": 0 00:11:06.747 }, 00:11:06.747 "claimed": true, 00:11:06.747 "claim_type": "exclusive_write", 00:11:06.747 "zoned": false, 00:11:06.747 "supported_io_types": { 00:11:06.747 "read": true, 00:11:06.747 "write": true, 00:11:06.747 "unmap": true, 00:11:06.747 "flush": true, 00:11:06.747 "reset": true, 00:11:06.747 "nvme_admin": false, 00:11:06.747 "nvme_io": false, 00:11:06.747 "nvme_io_md": false, 00:11:06.747 "write_zeroes": true, 00:11:06.747 "zcopy": true, 00:11:06.747 "get_zone_info": false, 00:11:06.747 "zone_management": false, 00:11:06.747 "zone_append": false, 00:11:06.747 "compare": false, 00:11:06.747 "compare_and_write": false, 00:11:06.747 "abort": true, 00:11:06.747 "seek_hole": false, 00:11:06.747 "seek_data": false, 00:11:06.747 "copy": true, 00:11:06.747 "nvme_iov_md": false 00:11:06.747 }, 00:11:06.747 "memory_domains": [ 00:11:06.747 { 00:11:06.747 "dma_device_id": "system", 00:11:06.747 "dma_device_type": 1 00:11:06.747 }, 00:11:06.747 { 00:11:06.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.747 "dma_device_type": 2 00:11:06.747 } 00:11:06.747 ], 00:11:06.747 "driver_specific": {} 00:11:06.747 } 00:11:06.747 ] 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:06.747 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.748 "name": "Existed_Raid", 00:11:06.748 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:06.748 "strip_size_kb": 64, 00:11:06.748 "state": "online", 00:11:06.748 "raid_level": "raid0", 00:11:06.748 
"superblock": true, 00:11:06.748 "num_base_bdevs": 4, 00:11:06.748 "num_base_bdevs_discovered": 4, 00:11:06.748 "num_base_bdevs_operational": 4, 00:11:06.748 "base_bdevs_list": [ 00:11:06.748 { 00:11:06.748 "name": "BaseBdev1", 00:11:06.748 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:06.748 "is_configured": true, 00:11:06.748 "data_offset": 2048, 00:11:06.748 "data_size": 63488 00:11:06.748 }, 00:11:06.748 { 00:11:06.748 "name": "BaseBdev2", 00:11:06.748 "uuid": "00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:06.748 "is_configured": true, 00:11:06.748 "data_offset": 2048, 00:11:06.748 "data_size": 63488 00:11:06.748 }, 00:11:06.748 { 00:11:06.748 "name": "BaseBdev3", 00:11:06.748 "uuid": "bc2fa250-d706-4cc8-b17e-01bb38cae864", 00:11:06.748 "is_configured": true, 00:11:06.748 "data_offset": 2048, 00:11:06.748 "data_size": 63488 00:11:06.748 }, 00:11:06.748 { 00:11:06.748 "name": "BaseBdev4", 00:11:06.748 "uuid": "9283586b-6704-431e-bdb2-0bc33288f83d", 00:11:06.748 "is_configured": true, 00:11:06.748 "data_offset": 2048, 00:11:06.748 "data_size": 63488 00:11:06.748 } 00:11:06.748 ] 00:11:06.748 }' 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.748 08:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.006 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.006 [2024-12-13 08:22:19.348285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.265 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.265 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.265 "name": "Existed_Raid", 00:11:07.265 "aliases": [ 00:11:07.265 "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e" 00:11:07.265 ], 00:11:07.265 "product_name": "Raid Volume", 00:11:07.265 "block_size": 512, 00:11:07.265 "num_blocks": 253952, 00:11:07.265 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:07.265 "assigned_rate_limits": { 00:11:07.265 "rw_ios_per_sec": 0, 00:11:07.265 "rw_mbytes_per_sec": 0, 00:11:07.265 "r_mbytes_per_sec": 0, 00:11:07.265 "w_mbytes_per_sec": 0 00:11:07.265 }, 00:11:07.265 "claimed": false, 00:11:07.265 "zoned": false, 00:11:07.265 "supported_io_types": { 00:11:07.265 "read": true, 00:11:07.265 "write": true, 00:11:07.265 "unmap": true, 00:11:07.265 "flush": true, 00:11:07.265 "reset": true, 00:11:07.265 "nvme_admin": false, 00:11:07.265 "nvme_io": false, 00:11:07.265 "nvme_io_md": false, 00:11:07.265 "write_zeroes": true, 00:11:07.265 "zcopy": false, 00:11:07.265 "get_zone_info": false, 00:11:07.265 "zone_management": false, 00:11:07.265 "zone_append": false, 00:11:07.265 "compare": false, 00:11:07.265 "compare_and_write": false, 00:11:07.265 "abort": false, 00:11:07.265 "seek_hole": false, 00:11:07.265 "seek_data": false, 
00:11:07.265 "copy": false, 00:11:07.265 "nvme_iov_md": false 00:11:07.265 }, 00:11:07.265 "memory_domains": [ 00:11:07.265 { 00:11:07.265 "dma_device_id": "system", 00:11:07.265 "dma_device_type": 1 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.265 "dma_device_type": 2 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "system", 00:11:07.265 "dma_device_type": 1 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.265 "dma_device_type": 2 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "system", 00:11:07.265 "dma_device_type": 1 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.265 "dma_device_type": 2 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "system", 00:11:07.265 "dma_device_type": 1 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.265 "dma_device_type": 2 00:11:07.265 } 00:11:07.265 ], 00:11:07.265 "driver_specific": { 00:11:07.265 "raid": { 00:11:07.265 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:07.265 "strip_size_kb": 64, 00:11:07.265 "state": "online", 00:11:07.265 "raid_level": "raid0", 00:11:07.265 "superblock": true, 00:11:07.265 "num_base_bdevs": 4, 00:11:07.265 "num_base_bdevs_discovered": 4, 00:11:07.265 "num_base_bdevs_operational": 4, 00:11:07.265 "base_bdevs_list": [ 00:11:07.265 { 00:11:07.265 "name": "BaseBdev1", 00:11:07.265 "uuid": "d6d2a8d0-f077-4947-a6ca-1859a92ef813", 00:11:07.265 "is_configured": true, 00:11:07.265 "data_offset": 2048, 00:11:07.265 "data_size": 63488 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "name": "BaseBdev2", 00:11:07.265 "uuid": "00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:07.265 "is_configured": true, 00:11:07.265 "data_offset": 2048, 00:11:07.265 "data_size": 63488 00:11:07.265 }, 00:11:07.265 { 00:11:07.265 "name": "BaseBdev3", 00:11:07.265 "uuid": 
"bc2fa250-d706-4cc8-b17e-01bb38cae864", 00:11:07.265 "is_configured": true, 00:11:07.266 "data_offset": 2048, 00:11:07.266 "data_size": 63488 00:11:07.266 }, 00:11:07.266 { 00:11:07.266 "name": "BaseBdev4", 00:11:07.266 "uuid": "9283586b-6704-431e-bdb2-0bc33288f83d", 00:11:07.266 "is_configured": true, 00:11:07.266 "data_offset": 2048, 00:11:07.266 "data_size": 63488 00:11:07.266 } 00:11:07.266 ] 00:11:07.266 } 00:11:07.266 } 00:11:07.266 }' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.266 BaseBdev2 00:11:07.266 BaseBdev3 00:11:07.266 BaseBdev4' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.266 
08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.266 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.525 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.526 [2024-12-13 08:22:19.651466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.526 [2024-12-13 08:22:19.651503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.526 [2024-12-13 08:22:19.651558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:07.526 
08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.526 08:22:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.526 "name": "Existed_Raid", 00:11:07.526 "uuid": "e6a3ee6a-f68a-43d0-b438-2dd8d519e32e", 00:11:07.526 "strip_size_kb": 64, 00:11:07.526 "state": "offline", 00:11:07.526 "raid_level": "raid0", 00:11:07.526 "superblock": true, 00:11:07.526 "num_base_bdevs": 4, 00:11:07.526 "num_base_bdevs_discovered": 3, 00:11:07.526 "num_base_bdevs_operational": 3, 00:11:07.526 "base_bdevs_list": [ 00:11:07.526 { 00:11:07.526 "name": null, 00:11:07.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.526 "is_configured": false, 00:11:07.526 "data_offset": 0, 00:11:07.526 "data_size": 63488 00:11:07.526 }, 00:11:07.526 { 00:11:07.526 "name": "BaseBdev2", 00:11:07.526 "uuid": "00cbef47-5aa1-4254-9c99-ef95db4d51e2", 00:11:07.526 "is_configured": true, 00:11:07.526 "data_offset": 2048, 00:11:07.526 "data_size": 63488 00:11:07.526 }, 00:11:07.526 { 00:11:07.526 "name": "BaseBdev3", 00:11:07.526 "uuid": "bc2fa250-d706-4cc8-b17e-01bb38cae864", 00:11:07.526 "is_configured": true, 00:11:07.526 "data_offset": 2048, 00:11:07.526 "data_size": 63488 00:11:07.526 }, 00:11:07.526 { 00:11:07.526 "name": "BaseBdev4", 00:11:07.526 "uuid": "9283586b-6704-431e-bdb2-0bc33288f83d", 00:11:07.526 "is_configured": true, 00:11:07.526 "data_offset": 2048, 00:11:07.526 "data_size": 63488 00:11:07.526 } 00:11:07.526 ] 00:11:07.526 }' 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.526 08:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.094 08:22:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 [2024-12-13 08:22:20.258418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 [2024-12-13 08:22:20.426113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.354 08:22:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.354 [2024-12-13 08:22:20.595377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.354 [2024-12-13 08:22:20.595493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.354 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 BaseBdev2 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:08.614 [ 00:11:08.614 { 00:11:08.614 "name": "BaseBdev2", 00:11:08.614 "aliases": [ 00:11:08.614 "710e8965-1bce-4ecf-b5ad-0e09cc78d59a" 00:11:08.614 ], 00:11:08.614 "product_name": "Malloc disk", 00:11:08.614 "block_size": 512, 00:11:08.614 "num_blocks": 65536, 00:11:08.614 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:08.614 "assigned_rate_limits": { 00:11:08.614 "rw_ios_per_sec": 0, 00:11:08.614 "rw_mbytes_per_sec": 0, 00:11:08.614 "r_mbytes_per_sec": 0, 00:11:08.614 "w_mbytes_per_sec": 0 00:11:08.614 }, 00:11:08.614 "claimed": false, 00:11:08.614 "zoned": false, 00:11:08.614 "supported_io_types": { 00:11:08.614 "read": true, 00:11:08.614 "write": true, 00:11:08.614 "unmap": true, 00:11:08.614 "flush": true, 00:11:08.614 "reset": true, 00:11:08.614 "nvme_admin": false, 00:11:08.614 "nvme_io": false, 00:11:08.614 "nvme_io_md": false, 00:11:08.614 "write_zeroes": true, 00:11:08.614 "zcopy": true, 00:11:08.614 "get_zone_info": false, 00:11:08.614 "zone_management": false, 00:11:08.614 "zone_append": false, 00:11:08.614 "compare": false, 00:11:08.614 "compare_and_write": false, 00:11:08.614 "abort": true, 00:11:08.614 "seek_hole": false, 00:11:08.614 "seek_data": false, 00:11:08.614 "copy": true, 00:11:08.614 "nvme_iov_md": false 00:11:08.614 }, 00:11:08.614 "memory_domains": [ 00:11:08.614 { 00:11:08.614 "dma_device_id": "system", 00:11:08.614 "dma_device_type": 1 00:11:08.614 }, 00:11:08.614 { 00:11:08.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.614 "dma_device_type": 2 00:11:08.614 } 00:11:08.614 ], 00:11:08.614 "driver_specific": {} 00:11:08.614 } 00:11:08.614 ] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.614 08:22:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 BaseBdev3 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.614 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.615 08:22:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.615 [ 00:11:08.615 { 00:11:08.615 "name": "BaseBdev3", 00:11:08.615 "aliases": [ 00:11:08.615 "8afc44f4-3012-425a-92d8-ead0c7515c5c" 00:11:08.615 ], 00:11:08.615 "product_name": "Malloc disk", 00:11:08.615 "block_size": 512, 00:11:08.615 "num_blocks": 65536, 00:11:08.615 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:08.615 "assigned_rate_limits": { 00:11:08.615 "rw_ios_per_sec": 0, 00:11:08.615 "rw_mbytes_per_sec": 0, 00:11:08.615 "r_mbytes_per_sec": 0, 00:11:08.615 "w_mbytes_per_sec": 0 00:11:08.615 }, 00:11:08.615 "claimed": false, 00:11:08.615 "zoned": false, 00:11:08.615 "supported_io_types": { 00:11:08.615 "read": true, 00:11:08.615 "write": true, 00:11:08.615 "unmap": true, 00:11:08.615 "flush": true, 00:11:08.615 "reset": true, 00:11:08.615 "nvme_admin": false, 00:11:08.615 "nvme_io": false, 00:11:08.615 "nvme_io_md": false, 00:11:08.615 "write_zeroes": true, 00:11:08.615 "zcopy": true, 00:11:08.615 "get_zone_info": false, 00:11:08.615 "zone_management": false, 00:11:08.615 "zone_append": false, 00:11:08.615 "compare": false, 00:11:08.615 "compare_and_write": false, 00:11:08.615 "abort": true, 00:11:08.615 "seek_hole": false, 00:11:08.615 "seek_data": false, 00:11:08.615 "copy": true, 00:11:08.615 "nvme_iov_md": false 00:11:08.615 }, 00:11:08.615 "memory_domains": [ 00:11:08.615 { 00:11:08.615 "dma_device_id": "system", 00:11:08.615 "dma_device_type": 1 00:11:08.615 }, 00:11:08.615 { 00:11:08.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.615 "dma_device_type": 2 00:11:08.615 } 00:11:08.615 ], 00:11:08.615 "driver_specific": {} 00:11:08.615 } 00:11:08.615 ] 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.615 BaseBdev4 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.615 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.877 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.877 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:08.877 08:22:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.877 08:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.877 [ 00:11:08.877 { 00:11:08.877 "name": "BaseBdev4", 00:11:08.877 "aliases": [ 00:11:08.877 "228237a0-6557-4408-a10e-605469f61c60" 00:11:08.877 ], 00:11:08.877 "product_name": "Malloc disk", 00:11:08.877 "block_size": 512, 00:11:08.877 "num_blocks": 65536, 00:11:08.877 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:08.877 "assigned_rate_limits": { 00:11:08.877 "rw_ios_per_sec": 0, 00:11:08.877 "rw_mbytes_per_sec": 0, 00:11:08.877 "r_mbytes_per_sec": 0, 00:11:08.877 "w_mbytes_per_sec": 0 00:11:08.877 }, 00:11:08.877 "claimed": false, 00:11:08.877 "zoned": false, 00:11:08.877 "supported_io_types": { 00:11:08.877 "read": true, 00:11:08.877 "write": true, 00:11:08.877 "unmap": true, 00:11:08.877 "flush": true, 00:11:08.877 "reset": true, 00:11:08.877 "nvme_admin": false, 00:11:08.877 "nvme_io": false, 00:11:08.877 "nvme_io_md": false, 00:11:08.877 "write_zeroes": true, 00:11:08.877 "zcopy": true, 00:11:08.877 "get_zone_info": false, 00:11:08.877 "zone_management": false, 00:11:08.877 "zone_append": false, 00:11:08.877 "compare": false, 00:11:08.877 "compare_and_write": false, 00:11:08.877 "abort": true, 00:11:08.877 "seek_hole": false, 00:11:08.877 "seek_data": false, 00:11:08.877 "copy": true, 00:11:08.877 "nvme_iov_md": false 00:11:08.877 }, 00:11:08.877 "memory_domains": [ 00:11:08.877 { 00:11:08.877 "dma_device_id": "system", 00:11:08.877 "dma_device_type": 1 00:11:08.877 }, 00:11:08.877 { 00:11:08.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.877 "dma_device_type": 2 00:11:08.877 } 00:11:08.877 ], 00:11:08.877 "driver_specific": {} 00:11:08.877 } 00:11:08.877 ] 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.877 [2024-12-13 08:22:21.019063] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.877 [2024-12-13 08:22:21.019240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.877 [2024-12-13 08:22:21.019305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.877 [2024-12-13 08:22:21.021501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.877 [2024-12-13 08:22:21.021624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.877 "name": "Existed_Raid", 00:11:08.877 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:08.877 "strip_size_kb": 64, 00:11:08.877 "state": "configuring", 00:11:08.877 "raid_level": "raid0", 00:11:08.877 "superblock": true, 00:11:08.877 "num_base_bdevs": 4, 00:11:08.877 "num_base_bdevs_discovered": 3, 00:11:08.877 "num_base_bdevs_operational": 4, 00:11:08.877 "base_bdevs_list": [ 00:11:08.877 { 00:11:08.877 "name": "BaseBdev1", 00:11:08.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.877 "is_configured": false, 00:11:08.877 "data_offset": 0, 00:11:08.877 "data_size": 0 00:11:08.877 }, 00:11:08.877 { 00:11:08.877 "name": "BaseBdev2", 00:11:08.877 "uuid": 
"710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:08.877 "is_configured": true, 00:11:08.877 "data_offset": 2048, 00:11:08.877 "data_size": 63488 00:11:08.877 }, 00:11:08.877 { 00:11:08.877 "name": "BaseBdev3", 00:11:08.877 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:08.877 "is_configured": true, 00:11:08.877 "data_offset": 2048, 00:11:08.877 "data_size": 63488 00:11:08.877 }, 00:11:08.877 { 00:11:08.877 "name": "BaseBdev4", 00:11:08.877 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:08.877 "is_configured": true, 00:11:08.877 "data_offset": 2048, 00:11:08.877 "data_size": 63488 00:11:08.877 } 00:11:08.877 ] 00:11:08.877 }' 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.877 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.136 [2024-12-13 08:22:21.474263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.136 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.402 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.402 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.402 "name": "Existed_Raid", 00:11:09.402 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:09.402 "strip_size_kb": 64, 00:11:09.402 "state": "configuring", 00:11:09.402 "raid_level": "raid0", 00:11:09.402 "superblock": true, 00:11:09.402 "num_base_bdevs": 4, 00:11:09.402 "num_base_bdevs_discovered": 2, 00:11:09.402 "num_base_bdevs_operational": 4, 00:11:09.402 "base_bdevs_list": [ 00:11:09.402 { 00:11:09.402 "name": "BaseBdev1", 00:11:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.402 "is_configured": false, 00:11:09.402 "data_offset": 0, 00:11:09.402 "data_size": 0 00:11:09.402 }, 00:11:09.402 { 00:11:09.402 "name": null, 00:11:09.402 "uuid": 
"710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:09.402 "is_configured": false, 00:11:09.402 "data_offset": 0, 00:11:09.402 "data_size": 63488 00:11:09.402 }, 00:11:09.402 { 00:11:09.402 "name": "BaseBdev3", 00:11:09.402 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:09.402 "is_configured": true, 00:11:09.402 "data_offset": 2048, 00:11:09.402 "data_size": 63488 00:11:09.402 }, 00:11:09.402 { 00:11:09.402 "name": "BaseBdev4", 00:11:09.402 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:09.402 "is_configured": true, 00:11:09.402 "data_offset": 2048, 00:11:09.402 "data_size": 63488 00:11:09.402 } 00:11:09.402 ] 00:11:09.402 }' 00:11:09.402 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.402 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.708 [2024-12-13 08:22:21.978545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:09.708 BaseBdev1 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.708 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.709 08:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.709 [ 00:11:09.709 { 00:11:09.709 "name": "BaseBdev1", 00:11:09.709 "aliases": [ 00:11:09.709 "ec42e977-0500-4e6c-b643-76e525985100" 00:11:09.709 ], 00:11:09.709 "product_name": "Malloc disk", 00:11:09.709 "block_size": 512, 00:11:09.709 "num_blocks": 65536, 00:11:09.709 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 
00:11:09.709 "assigned_rate_limits": { 00:11:09.709 "rw_ios_per_sec": 0, 00:11:09.709 "rw_mbytes_per_sec": 0, 00:11:09.709 "r_mbytes_per_sec": 0, 00:11:09.709 "w_mbytes_per_sec": 0 00:11:09.709 }, 00:11:09.709 "claimed": true, 00:11:09.709 "claim_type": "exclusive_write", 00:11:09.709 "zoned": false, 00:11:09.709 "supported_io_types": { 00:11:09.709 "read": true, 00:11:09.709 "write": true, 00:11:09.709 "unmap": true, 00:11:09.709 "flush": true, 00:11:09.709 "reset": true, 00:11:09.709 "nvme_admin": false, 00:11:09.709 "nvme_io": false, 00:11:09.709 "nvme_io_md": false, 00:11:09.709 "write_zeroes": true, 00:11:09.709 "zcopy": true, 00:11:09.709 "get_zone_info": false, 00:11:09.709 "zone_management": false, 00:11:09.709 "zone_append": false, 00:11:09.709 "compare": false, 00:11:09.709 "compare_and_write": false, 00:11:09.709 "abort": true, 00:11:09.709 "seek_hole": false, 00:11:09.709 "seek_data": false, 00:11:09.709 "copy": true, 00:11:09.709 "nvme_iov_md": false 00:11:09.709 }, 00:11:09.709 "memory_domains": [ 00:11:09.709 { 00:11:09.709 "dma_device_id": "system", 00:11:09.709 "dma_device_type": 1 00:11:09.709 }, 00:11:09.709 { 00:11:09.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.709 "dma_device_type": 2 00:11:09.709 } 00:11:09.709 ], 00:11:09.709 "driver_specific": {} 00:11:09.709 } 00:11:09.709 ] 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.709 08:22:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.709 "name": "Existed_Raid", 00:11:09.709 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:09.709 "strip_size_kb": 64, 00:11:09.709 "state": "configuring", 00:11:09.709 "raid_level": "raid0", 00:11:09.709 "superblock": true, 00:11:09.709 "num_base_bdevs": 4, 00:11:09.709 "num_base_bdevs_discovered": 3, 00:11:09.709 "num_base_bdevs_operational": 4, 00:11:09.709 "base_bdevs_list": [ 00:11:09.709 { 00:11:09.709 "name": "BaseBdev1", 00:11:09.709 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:09.709 
"is_configured": true, 00:11:09.709 "data_offset": 2048, 00:11:09.709 "data_size": 63488 00:11:09.709 }, 00:11:09.709 { 00:11:09.709 "name": null, 00:11:09.709 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:09.709 "is_configured": false, 00:11:09.709 "data_offset": 0, 00:11:09.709 "data_size": 63488 00:11:09.709 }, 00:11:09.709 { 00:11:09.709 "name": "BaseBdev3", 00:11:09.709 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:09.709 "is_configured": true, 00:11:09.709 "data_offset": 2048, 00:11:09.709 "data_size": 63488 00:11:09.709 }, 00:11:09.709 { 00:11:09.709 "name": "BaseBdev4", 00:11:09.709 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:09.709 "is_configured": true, 00:11:09.709 "data_offset": 2048, 00:11:09.709 "data_size": 63488 00:11:09.709 } 00:11:09.709 ] 00:11:09.709 }' 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.709 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.278 08:22:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 [2024-12-13 08:22:22.553702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 08:22:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.278 "name": "Existed_Raid", 00:11:10.278 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:10.278 "strip_size_kb": 64, 00:11:10.278 "state": "configuring", 00:11:10.278 "raid_level": "raid0", 00:11:10.278 "superblock": true, 00:11:10.278 "num_base_bdevs": 4, 00:11:10.278 "num_base_bdevs_discovered": 2, 00:11:10.278 "num_base_bdevs_operational": 4, 00:11:10.278 "base_bdevs_list": [ 00:11:10.278 { 00:11:10.278 "name": "BaseBdev1", 00:11:10.278 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:10.278 "is_configured": true, 00:11:10.278 "data_offset": 2048, 00:11:10.278 "data_size": 63488 00:11:10.278 }, 00:11:10.278 { 00:11:10.278 "name": null, 00:11:10.278 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:10.278 "is_configured": false, 00:11:10.278 "data_offset": 0, 00:11:10.278 "data_size": 63488 00:11:10.278 }, 00:11:10.278 { 00:11:10.278 "name": null, 00:11:10.278 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:10.278 "is_configured": false, 00:11:10.278 "data_offset": 0, 00:11:10.278 "data_size": 63488 00:11:10.278 }, 00:11:10.278 { 00:11:10.278 "name": "BaseBdev4", 00:11:10.278 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:10.278 "is_configured": true, 00:11:10.278 "data_offset": 2048, 00:11:10.278 "data_size": 63488 00:11:10.278 } 00:11:10.278 ] 00:11:10.278 }' 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.278 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.845 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 08:22:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 08:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.845 08:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 [2024-12-13 08:22:23.016912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.845 "name": "Existed_Raid", 00:11:10.845 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:10.845 "strip_size_kb": 64, 00:11:10.845 "state": "configuring", 00:11:10.845 "raid_level": "raid0", 00:11:10.845 "superblock": true, 00:11:10.845 "num_base_bdevs": 4, 00:11:10.845 "num_base_bdevs_discovered": 3, 00:11:10.845 "num_base_bdevs_operational": 4, 00:11:10.845 "base_bdevs_list": [ 00:11:10.845 { 00:11:10.845 "name": "BaseBdev1", 00:11:10.845 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:10.845 "is_configured": true, 00:11:10.845 "data_offset": 2048, 00:11:10.845 "data_size": 63488 00:11:10.845 }, 00:11:10.845 { 00:11:10.845 "name": null, 00:11:10.845 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:10.845 "is_configured": false, 00:11:10.845 "data_offset": 0, 00:11:10.845 "data_size": 63488 00:11:10.845 }, 00:11:10.845 { 00:11:10.845 "name": "BaseBdev3", 00:11:10.845 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:10.845 "is_configured": true, 00:11:10.845 "data_offset": 2048, 00:11:10.845 "data_size": 63488 00:11:10.845 
}, 00:11:10.845 { 00:11:10.845 "name": "BaseBdev4", 00:11:10.845 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:10.845 "is_configured": true, 00:11:10.845 "data_offset": 2048, 00:11:10.845 "data_size": 63488 00:11:10.845 } 00:11:10.845 ] 00:11:10.845 }' 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.845 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.413 [2024-12-13 08:22:23.568044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.413 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.413 "name": "Existed_Raid", 00:11:11.413 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:11.413 "strip_size_kb": 64, 00:11:11.413 "state": "configuring", 00:11:11.413 "raid_level": "raid0", 00:11:11.413 "superblock": true, 00:11:11.413 "num_base_bdevs": 4, 00:11:11.413 "num_base_bdevs_discovered": 2, 00:11:11.413 "num_base_bdevs_operational": 4, 
00:11:11.413 "base_bdevs_list": [ 00:11:11.413 { 00:11:11.413 "name": null, 00:11:11.413 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:11.413 "is_configured": false, 00:11:11.413 "data_offset": 0, 00:11:11.413 "data_size": 63488 00:11:11.413 }, 00:11:11.413 { 00:11:11.413 "name": null, 00:11:11.413 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:11.413 "is_configured": false, 00:11:11.413 "data_offset": 0, 00:11:11.413 "data_size": 63488 00:11:11.413 }, 00:11:11.413 { 00:11:11.414 "name": "BaseBdev3", 00:11:11.414 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:11.414 "is_configured": true, 00:11:11.414 "data_offset": 2048, 00:11:11.414 "data_size": 63488 00:11:11.414 }, 00:11:11.414 { 00:11:11.414 "name": "BaseBdev4", 00:11:11.414 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:11.414 "is_configured": true, 00:11:11.414 "data_offset": 2048, 00:11:11.414 "data_size": 63488 00:11:11.414 } 00:11:11.414 ] 00:11:11.414 }' 00:11:11.414 08:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.414 08:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.982 [2024-12-13 08:22:24.192596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.982 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.982 "name": "Existed_Raid", 00:11:11.982 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:11.982 "strip_size_kb": 64, 00:11:11.982 "state": "configuring", 00:11:11.982 "raid_level": "raid0", 00:11:11.982 "superblock": true, 00:11:11.982 "num_base_bdevs": 4, 00:11:11.982 "num_base_bdevs_discovered": 3, 00:11:11.982 "num_base_bdevs_operational": 4, 00:11:11.982 "base_bdevs_list": [ 00:11:11.983 { 00:11:11.983 "name": null, 00:11:11.983 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:11.983 "is_configured": false, 00:11:11.983 "data_offset": 0, 00:11:11.983 "data_size": 63488 00:11:11.983 }, 00:11:11.983 { 00:11:11.983 "name": "BaseBdev2", 00:11:11.983 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:11.983 "is_configured": true, 00:11:11.983 "data_offset": 2048, 00:11:11.983 "data_size": 63488 00:11:11.983 }, 00:11:11.983 { 00:11:11.983 "name": "BaseBdev3", 00:11:11.983 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:11.983 "is_configured": true, 00:11:11.983 "data_offset": 2048, 00:11:11.983 "data_size": 63488 00:11:11.983 }, 00:11:11.983 { 00:11:11.983 "name": "BaseBdev4", 00:11:11.983 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:11.983 "is_configured": true, 00:11:11.983 "data_offset": 2048, 00:11:11.983 "data_size": 63488 00:11:11.983 } 00:11:11.983 ] 00:11:11.983 }' 00:11:11.983 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.983 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ec42e977-0500-4e6c-b643-76e525985100 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 [2024-12-13 08:22:24.814962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.552 [2024-12-13 08:22:24.815297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.552 [2024-12-13 08:22:24.815314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.552 [2024-12-13 08:22:24.815622] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:12.552 [2024-12-13 08:22:24.815792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.552 [2024-12-13 08:22:24.815805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.552 NewBaseBdev 00:11:12.552 [2024-12-13 08:22:24.815961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 [ 00:11:12.552 { 00:11:12.552 "name": "NewBaseBdev", 00:11:12.552 "aliases": [ 00:11:12.552 "ec42e977-0500-4e6c-b643-76e525985100" 00:11:12.552 ], 00:11:12.552 "product_name": "Malloc disk", 00:11:12.552 "block_size": 512, 00:11:12.552 "num_blocks": 65536, 00:11:12.552 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:12.552 "assigned_rate_limits": { 00:11:12.552 "rw_ios_per_sec": 0, 00:11:12.552 "rw_mbytes_per_sec": 0, 00:11:12.552 "r_mbytes_per_sec": 0, 00:11:12.552 "w_mbytes_per_sec": 0 00:11:12.552 }, 00:11:12.552 "claimed": true, 00:11:12.552 "claim_type": "exclusive_write", 00:11:12.552 "zoned": false, 00:11:12.552 "supported_io_types": { 00:11:12.552 "read": true, 00:11:12.552 "write": true, 00:11:12.552 "unmap": true, 00:11:12.552 "flush": true, 00:11:12.552 "reset": true, 00:11:12.552 "nvme_admin": false, 00:11:12.552 "nvme_io": false, 00:11:12.552 "nvme_io_md": false, 00:11:12.552 "write_zeroes": true, 00:11:12.552 "zcopy": true, 00:11:12.552 "get_zone_info": false, 00:11:12.552 "zone_management": false, 00:11:12.552 "zone_append": false, 00:11:12.552 "compare": false, 00:11:12.552 "compare_and_write": false, 00:11:12.552 "abort": true, 00:11:12.552 "seek_hole": false, 00:11:12.552 "seek_data": false, 00:11:12.552 "copy": true, 00:11:12.552 "nvme_iov_md": false 00:11:12.552 }, 00:11:12.552 "memory_domains": [ 00:11:12.552 { 00:11:12.552 "dma_device_id": "system", 00:11:12.552 "dma_device_type": 1 00:11:12.552 }, 00:11:12.552 { 00:11:12.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.552 "dma_device_type": 2 00:11:12.552 } 00:11:12.552 ], 00:11:12.552 "driver_specific": {} 00:11:12.552 } 00:11:12.552 ] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.552 "name": "Existed_Raid", 00:11:12.552 "uuid": 
"a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:12.552 "strip_size_kb": 64, 00:11:12.552 "state": "online", 00:11:12.552 "raid_level": "raid0", 00:11:12.552 "superblock": true, 00:11:12.552 "num_base_bdevs": 4, 00:11:12.552 "num_base_bdevs_discovered": 4, 00:11:12.552 "num_base_bdevs_operational": 4, 00:11:12.552 "base_bdevs_list": [ 00:11:12.552 { 00:11:12.552 "name": "NewBaseBdev", 00:11:12.552 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:12.552 "is_configured": true, 00:11:12.552 "data_offset": 2048, 00:11:12.552 "data_size": 63488 00:11:12.552 }, 00:11:12.552 { 00:11:12.552 "name": "BaseBdev2", 00:11:12.552 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:12.552 "is_configured": true, 00:11:12.552 "data_offset": 2048, 00:11:12.552 "data_size": 63488 00:11:12.552 }, 00:11:12.552 { 00:11:12.552 "name": "BaseBdev3", 00:11:12.552 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:12.552 "is_configured": true, 00:11:12.552 "data_offset": 2048, 00:11:12.552 "data_size": 63488 00:11:12.552 }, 00:11:12.552 { 00:11:12.552 "name": "BaseBdev4", 00:11:12.552 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:12.552 "is_configured": true, 00:11:12.552 "data_offset": 2048, 00:11:12.552 "data_size": 63488 00:11:12.552 } 00:11:12.552 ] 00:11:12.552 }' 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.552 08:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.121 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.121 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.122 08:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.122 [2024-12-13 08:22:25.282687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.122 "name": "Existed_Raid", 00:11:13.122 "aliases": [ 00:11:13.122 "a115ce00-8172-4215-b7cf-a1af87a13400" 00:11:13.122 ], 00:11:13.122 "product_name": "Raid Volume", 00:11:13.122 "block_size": 512, 00:11:13.122 "num_blocks": 253952, 00:11:13.122 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:13.122 "assigned_rate_limits": { 00:11:13.122 "rw_ios_per_sec": 0, 00:11:13.122 "rw_mbytes_per_sec": 0, 00:11:13.122 "r_mbytes_per_sec": 0, 00:11:13.122 "w_mbytes_per_sec": 0 00:11:13.122 }, 00:11:13.122 "claimed": false, 00:11:13.122 "zoned": false, 00:11:13.122 "supported_io_types": { 00:11:13.122 "read": true, 00:11:13.122 "write": true, 00:11:13.122 "unmap": true, 00:11:13.122 "flush": true, 00:11:13.122 "reset": true, 00:11:13.122 "nvme_admin": false, 00:11:13.122 "nvme_io": false, 00:11:13.122 "nvme_io_md": false, 00:11:13.122 "write_zeroes": true, 00:11:13.122 "zcopy": false, 00:11:13.122 "get_zone_info": false, 00:11:13.122 "zone_management": false, 00:11:13.122 "zone_append": false, 
00:11:13.122 "compare": false, 00:11:13.122 "compare_and_write": false, 00:11:13.122 "abort": false, 00:11:13.122 "seek_hole": false, 00:11:13.122 "seek_data": false, 00:11:13.122 "copy": false, 00:11:13.122 "nvme_iov_md": false 00:11:13.122 }, 00:11:13.122 "memory_domains": [ 00:11:13.122 { 00:11:13.122 "dma_device_id": "system", 00:11:13.122 "dma_device_type": 1 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.122 "dma_device_type": 2 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "system", 00:11:13.122 "dma_device_type": 1 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.122 "dma_device_type": 2 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "system", 00:11:13.122 "dma_device_type": 1 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.122 "dma_device_type": 2 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "system", 00:11:13.122 "dma_device_type": 1 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.122 "dma_device_type": 2 00:11:13.122 } 00:11:13.122 ], 00:11:13.122 "driver_specific": { 00:11:13.122 "raid": { 00:11:13.122 "uuid": "a115ce00-8172-4215-b7cf-a1af87a13400", 00:11:13.122 "strip_size_kb": 64, 00:11:13.122 "state": "online", 00:11:13.122 "raid_level": "raid0", 00:11:13.122 "superblock": true, 00:11:13.122 "num_base_bdevs": 4, 00:11:13.122 "num_base_bdevs_discovered": 4, 00:11:13.122 "num_base_bdevs_operational": 4, 00:11:13.122 "base_bdevs_list": [ 00:11:13.122 { 00:11:13.122 "name": "NewBaseBdev", 00:11:13.122 "uuid": "ec42e977-0500-4e6c-b643-76e525985100", 00:11:13.122 "is_configured": true, 00:11:13.122 "data_offset": 2048, 00:11:13.122 "data_size": 63488 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "name": "BaseBdev2", 00:11:13.122 "uuid": "710e8965-1bce-4ecf-b5ad-0e09cc78d59a", 00:11:13.122 "is_configured": true, 00:11:13.122 
"data_offset": 2048, 00:11:13.122 "data_size": 63488 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "name": "BaseBdev3", 00:11:13.122 "uuid": "8afc44f4-3012-425a-92d8-ead0c7515c5c", 00:11:13.122 "is_configured": true, 00:11:13.122 "data_offset": 2048, 00:11:13.122 "data_size": 63488 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "name": "BaseBdev4", 00:11:13.122 "uuid": "228237a0-6557-4408-a10e-605469f61c60", 00:11:13.122 "is_configured": true, 00:11:13.122 "data_offset": 2048, 00:11:13.122 "data_size": 63488 00:11:13.122 } 00:11:13.122 ] 00:11:13.122 } 00:11:13.122 } 00:11:13.122 }' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.122 BaseBdev2 00:11:13.122 BaseBdev3 00:11:13.122 BaseBdev4' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.122 08:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.122 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.381 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.382 [2024-12-13 08:22:25.601737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.382 [2024-12-13 08:22:25.601844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.382 [2024-12-13 08:22:25.601974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.382 [2024-12-13 08:22:25.602091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.382 [2024-12-13 
08:22:25.602165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70219 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70219 ']' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70219 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70219 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.382 killing process with pid 70219 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70219' 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70219 00:11:13.382 [2024-12-13 08:22:25.646582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.382 08:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70219 00:11:13.950 [2024-12-13 08:22:26.104280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.328 08:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.328 00:11:15.328 real 0m12.064s 00:11:15.328 user 0m18.990s 00:11:15.328 sys 0m2.142s 00:11:15.328 08:22:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.328 ************************************ 00:11:15.328 END TEST raid_state_function_test_sb 00:11:15.328 ************************************ 00:11:15.328 08:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.328 08:22:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:15.328 08:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.328 08:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.328 08:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.328 ************************************ 00:11:15.328 START TEST raid_superblock_test 00:11:15.328 ************************************ 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@399 -- # local strip_size 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70890 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70890 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70890 ']' 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.328 08:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.328 [2024-12-13 08:22:27.539757] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:11:15.328 [2024-12-13 08:22:27.539908] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70890 ] 00:11:15.606 [2024-12-13 08:22:27.718159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.606 [2024-12-13 08:22:27.852921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.889 [2024-12-13 08:22:28.077995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.889 [2024-12-13 08:22:28.078078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:16.149 
08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.149 malloc1 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.149 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 [2024-12-13 08:22:28.517946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:16.409 [2024-12-13 08:22:28.518068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.409 [2024-12-13 08:22:28.518155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:16.409 [2024-12-13 08:22:28.518196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.409 [2024-12-13 08:22:28.520670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.409 [2024-12-13 08:22:28.520758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:16.409 pt1 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 malloc2 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 [2024-12-13 08:22:28.582603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.409 [2024-12-13 08:22:28.582671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.409 [2024-12-13 08:22:28.582695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:16.409 [2024-12-13 08:22:28.582706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.409 [2024-12-13 08:22:28.585188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.409 [2024-12-13 08:22:28.585232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.409 
pt2 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 malloc3 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 [2024-12-13 08:22:28.652338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:16.409 [2024-12-13 08:22:28.652456] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.409 [2024-12-13 08:22:28.652503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:16.409 [2024-12-13 08:22:28.652547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.409 [2024-12-13 08:22:28.655084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.409 [2024-12-13 08:22:28.655182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.409 pt3 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.409 malloc4 00:11:16.409 08:22:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.410 [2024-12-13 08:22:28.717408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.410 [2024-12-13 08:22:28.717547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.410 [2024-12-13 08:22:28.717607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:16.410 [2024-12-13 08:22:28.717645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.410 [2024-12-13 08:22:28.720132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.410 [2024-12-13 08:22:28.720216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.410 pt4 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.410 [2024-12-13 08:22:28.729413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:16.410 [2024-12-13 
08:22:28.731558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.410 [2024-12-13 08:22:28.731656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.410 [2024-12-13 08:22:28.731710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.410 [2024-12-13 08:22:28.731903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:16.410 [2024-12-13 08:22:28.731916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.410 [2024-12-13 08:22:28.732215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.410 [2024-12-13 08:22:28.732404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:16.410 [2024-12-13 08:22:28.732470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:16.410 [2024-12-13 08:22:28.732648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.410 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.670 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.670 "name": "raid_bdev1", 00:11:16.670 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:16.670 "strip_size_kb": 64, 00:11:16.670 "state": "online", 00:11:16.670 "raid_level": "raid0", 00:11:16.670 "superblock": true, 00:11:16.670 "num_base_bdevs": 4, 00:11:16.670 "num_base_bdevs_discovered": 4, 00:11:16.670 "num_base_bdevs_operational": 4, 00:11:16.670 "base_bdevs_list": [ 00:11:16.670 { 00:11:16.670 "name": "pt1", 00:11:16.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.670 "is_configured": true, 00:11:16.670 "data_offset": 2048, 00:11:16.670 "data_size": 63488 00:11:16.670 }, 00:11:16.670 { 00:11:16.670 "name": "pt2", 00:11:16.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.670 "is_configured": true, 00:11:16.670 "data_offset": 2048, 00:11:16.670 "data_size": 63488 00:11:16.670 }, 00:11:16.670 { 00:11:16.670 "name": "pt3", 00:11:16.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.670 "is_configured": true, 00:11:16.670 "data_offset": 2048, 00:11:16.670 
"data_size": 63488 00:11:16.670 }, 00:11:16.670 { 00:11:16.670 "name": "pt4", 00:11:16.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.670 "is_configured": true, 00:11:16.670 "data_offset": 2048, 00:11:16.670 "data_size": 63488 00:11:16.670 } 00:11:16.670 ] 00:11:16.670 }' 00:11:16.670 08:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.670 08:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.930 [2024-12-13 08:22:29.224981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.930 "name": "raid_bdev1", 00:11:16.930 "aliases": [ 00:11:16.930 "e85fbf92-8712-4ae3-b244-e080181fe9fc" 
00:11:16.930 ], 00:11:16.930 "product_name": "Raid Volume", 00:11:16.930 "block_size": 512, 00:11:16.930 "num_blocks": 253952, 00:11:16.930 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:16.930 "assigned_rate_limits": { 00:11:16.930 "rw_ios_per_sec": 0, 00:11:16.930 "rw_mbytes_per_sec": 0, 00:11:16.930 "r_mbytes_per_sec": 0, 00:11:16.930 "w_mbytes_per_sec": 0 00:11:16.930 }, 00:11:16.930 "claimed": false, 00:11:16.930 "zoned": false, 00:11:16.930 "supported_io_types": { 00:11:16.930 "read": true, 00:11:16.930 "write": true, 00:11:16.930 "unmap": true, 00:11:16.930 "flush": true, 00:11:16.930 "reset": true, 00:11:16.930 "nvme_admin": false, 00:11:16.930 "nvme_io": false, 00:11:16.930 "nvme_io_md": false, 00:11:16.930 "write_zeroes": true, 00:11:16.930 "zcopy": false, 00:11:16.930 "get_zone_info": false, 00:11:16.930 "zone_management": false, 00:11:16.930 "zone_append": false, 00:11:16.930 "compare": false, 00:11:16.930 "compare_and_write": false, 00:11:16.930 "abort": false, 00:11:16.930 "seek_hole": false, 00:11:16.930 "seek_data": false, 00:11:16.930 "copy": false, 00:11:16.930 "nvme_iov_md": false 00:11:16.930 }, 00:11:16.930 "memory_domains": [ 00:11:16.930 { 00:11:16.930 "dma_device_id": "system", 00:11:16.930 "dma_device_type": 1 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.930 "dma_device_type": 2 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "system", 00:11:16.930 "dma_device_type": 1 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.930 "dma_device_type": 2 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "system", 00:11:16.930 "dma_device_type": 1 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.930 "dma_device_type": 2 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": "system", 00:11:16.930 "dma_device_type": 1 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:16.930 "dma_device_type": 2 00:11:16.930 } 00:11:16.930 ], 00:11:16.930 "driver_specific": { 00:11:16.930 "raid": { 00:11:16.930 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:16.930 "strip_size_kb": 64, 00:11:16.930 "state": "online", 00:11:16.930 "raid_level": "raid0", 00:11:16.930 "superblock": true, 00:11:16.930 "num_base_bdevs": 4, 00:11:16.930 "num_base_bdevs_discovered": 4, 00:11:16.930 "num_base_bdevs_operational": 4, 00:11:16.930 "base_bdevs_list": [ 00:11:16.930 { 00:11:16.930 "name": "pt1", 00:11:16.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.930 "is_configured": true, 00:11:16.930 "data_offset": 2048, 00:11:16.930 "data_size": 63488 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "name": "pt2", 00:11:16.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.930 "is_configured": true, 00:11:16.930 "data_offset": 2048, 00:11:16.930 "data_size": 63488 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "name": "pt3", 00:11:16.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.930 "is_configured": true, 00:11:16.930 "data_offset": 2048, 00:11:16.930 "data_size": 63488 00:11:16.930 }, 00:11:16.930 { 00:11:16.930 "name": "pt4", 00:11:16.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.930 "is_configured": true, 00:11:16.930 "data_offset": 2048, 00:11:16.930 "data_size": 63488 00:11:16.930 } 00:11:16.930 ] 00:11:16.930 } 00:11:16.930 } 00:11:16.930 }' 00:11:16.930 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.190 pt2 00:11:17.190 pt3 00:11:17.190 pt4' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.190 08:22:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.190 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.191 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 [2024-12-13 08:22:29.600360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e85fbf92-8712-4ae3-b244-e080181fe9fc 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e85fbf92-8712-4ae3-b244-e080181fe9fc ']' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 [2024-12-13 08:22:29.631916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.450 [2024-12-13 08:22:29.632002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.450 [2024-12-13 08:22:29.632142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.450 [2024-12-13 08:22:29.632253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.450 [2024-12-13 08:22:29.632312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.450 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.451 08:22:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.451 [2024-12-13 08:22:29.787715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:17.451 [2024-12-13 08:22:29.789862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:17.451 [2024-12-13 08:22:29.789918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:17.451 [2024-12-13 08:22:29.789956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:17.451 [2024-12-13 08:22:29.790012] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:17.451 [2024-12-13 08:22:29.790068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:17.451 [2024-12-13 08:22:29.790090] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:17.451 [2024-12-13 08:22:29.790130] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:17.451 [2024-12-13 08:22:29.790147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.451 [2024-12-13 08:22:29.790162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:17.451 request: 00:11:17.451 { 00:11:17.451 "name": "raid_bdev1", 00:11:17.451 "raid_level": "raid0", 00:11:17.451 "base_bdevs": [ 00:11:17.451 "malloc1", 00:11:17.451 "malloc2", 00:11:17.451 "malloc3", 00:11:17.451 "malloc4" 00:11:17.451 ], 00:11:17.451 "strip_size_kb": 64, 00:11:17.451 "superblock": false, 00:11:17.451 "method": "bdev_raid_create", 00:11:17.451 "req_id": 1 00:11:17.451 } 00:11:17.451 Got JSON-RPC error response 00:11:17.451 response: 00:11:17.451 { 00:11:17.451 "code": -17, 00:11:17.451 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:17.451 } 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.451 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.710 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:17.710 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.711 [2024-12-13 08:22:29.851548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.711 [2024-12-13 08:22:29.851695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.711 [2024-12-13 08:22:29.851719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.711 [2024-12-13 08:22:29.851733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.711 [2024-12-13 08:22:29.854241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.711 [2024-12-13 08:22:29.854289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.711 [2024-12-13 08:22:29.854390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:17.711 [2024-12-13 08:22:29.854456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.711 pt1 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.711 "name": "raid_bdev1", 00:11:17.711 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:17.711 "strip_size_kb": 64, 00:11:17.711 "state": "configuring", 00:11:17.711 "raid_level": "raid0", 00:11:17.711 "superblock": true, 00:11:17.711 "num_base_bdevs": 4, 00:11:17.711 "num_base_bdevs_discovered": 1, 00:11:17.711 "num_base_bdevs_operational": 4, 00:11:17.711 "base_bdevs_list": [ 00:11:17.711 { 00:11:17.711 "name": "pt1", 00:11:17.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.711 "is_configured": true, 00:11:17.711 "data_offset": 2048, 00:11:17.711 "data_size": 63488 00:11:17.711 }, 00:11:17.711 { 00:11:17.711 "name": null, 00:11:17.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.711 "is_configured": false, 00:11:17.711 "data_offset": 2048, 00:11:17.711 "data_size": 63488 00:11:17.711 }, 00:11:17.711 { 00:11:17.711 "name": null, 00:11:17.711 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.711 "is_configured": false, 00:11:17.711 "data_offset": 2048, 00:11:17.711 "data_size": 63488 00:11:17.711 }, 00:11:17.711 { 00:11:17.711 "name": null, 00:11:17.711 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.711 "is_configured": false, 00:11:17.711 "data_offset": 2048, 00:11:17.711 "data_size": 63488 00:11:17.711 } 00:11:17.711 ] 00:11:17.711 }' 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.711 08:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.279 [2024-12-13 08:22:30.354772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.279 [2024-12-13 08:22:30.354933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.279 [2024-12-13 08:22:30.354984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:18.279 [2024-12-13 08:22:30.355029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.279 [2024-12-13 08:22:30.355563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.279 [2024-12-13 08:22:30.355631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.279 [2024-12-13 08:22:30.355760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.279 [2024-12-13 08:22:30.355819] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.279 pt2 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.279 [2024-12-13 08:22:30.366763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.279 08:22:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.279 "name": "raid_bdev1", 00:11:18.279 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:18.279 "strip_size_kb": 64, 00:11:18.279 "state": "configuring", 00:11:18.279 "raid_level": "raid0", 00:11:18.279 "superblock": true, 00:11:18.279 "num_base_bdevs": 4, 00:11:18.279 "num_base_bdevs_discovered": 1, 00:11:18.279 "num_base_bdevs_operational": 4, 00:11:18.279 "base_bdevs_list": [ 00:11:18.279 { 00:11:18.279 "name": "pt1", 00:11:18.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.279 "is_configured": true, 00:11:18.279 "data_offset": 2048, 00:11:18.279 "data_size": 63488 00:11:18.279 }, 00:11:18.279 { 00:11:18.279 "name": null, 00:11:18.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.279 "is_configured": false, 00:11:18.279 "data_offset": 0, 00:11:18.279 "data_size": 63488 00:11:18.279 }, 00:11:18.279 { 00:11:18.279 "name": null, 00:11:18.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.279 "is_configured": false, 00:11:18.279 "data_offset": 2048, 00:11:18.279 "data_size": 63488 00:11:18.279 }, 00:11:18.279 { 00:11:18.279 "name": null, 00:11:18.279 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.279 "is_configured": false, 00:11:18.279 "data_offset": 2048, 00:11:18.279 "data_size": 63488 00:11:18.279 } 00:11:18.279 ] 00:11:18.279 }' 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.279 08:22:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.539 [2024-12-13 08:22:30.806059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.539 [2024-12-13 08:22:30.806164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.539 [2024-12-13 08:22:30.806188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:18.539 [2024-12-13 08:22:30.806199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.539 [2024-12-13 08:22:30.806736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.539 [2024-12-13 08:22:30.806771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.539 [2024-12-13 08:22:30.806887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.539 [2024-12-13 08:22:30.806913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.539 pt2 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.539 [2024-12-13 08:22:30.818010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:18.539 [2024-12-13 08:22:30.818098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.539 [2024-12-13 08:22:30.818156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:18.539 [2024-12-13 08:22:30.818168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.539 [2024-12-13 08:22:30.818660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.539 [2024-12-13 08:22:30.818696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:18.539 [2024-12-13 08:22:30.818794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:18.539 [2024-12-13 08:22:30.818828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.539 pt3 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.539 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.539 [2024-12-13 08:22:30.829973] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.539 [2024-12-13 08:22:30.830043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.539 [2024-12-13 08:22:30.830067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:18.539 [2024-12-13 08:22:30.830077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.539 [2024-12-13 08:22:30.830607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.539 [2024-12-13 08:22:30.830643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.539 [2024-12-13 08:22:30.830745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.539 [2024-12-13 08:22:30.830775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.539 [2024-12-13 08:22:30.830946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.539 [2024-12-13 08:22:30.830962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.539 [2024-12-13 08:22:30.831275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.539 [2024-12-13 08:22:30.831455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.539 [2024-12-13 08:22:30.831470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:18.540 [2024-12-13 08:22:30.831643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.540 pt4 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.540 "name": "raid_bdev1", 00:11:18.540 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:18.540 "strip_size_kb": 64, 00:11:18.540 "state": "online", 00:11:18.540 "raid_level": "raid0", 00:11:18.540 
"superblock": true, 00:11:18.540 "num_base_bdevs": 4, 00:11:18.540 "num_base_bdevs_discovered": 4, 00:11:18.540 "num_base_bdevs_operational": 4, 00:11:18.540 "base_bdevs_list": [ 00:11:18.540 { 00:11:18.540 "name": "pt1", 00:11:18.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.540 "is_configured": true, 00:11:18.540 "data_offset": 2048, 00:11:18.540 "data_size": 63488 00:11:18.540 }, 00:11:18.540 { 00:11:18.540 "name": "pt2", 00:11:18.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.540 "is_configured": true, 00:11:18.540 "data_offset": 2048, 00:11:18.540 "data_size": 63488 00:11:18.540 }, 00:11:18.540 { 00:11:18.540 "name": "pt3", 00:11:18.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.540 "is_configured": true, 00:11:18.540 "data_offset": 2048, 00:11:18.540 "data_size": 63488 00:11:18.540 }, 00:11:18.540 { 00:11:18.540 "name": "pt4", 00:11:18.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.540 "is_configured": true, 00:11:18.540 "data_offset": 2048, 00:11:18.540 "data_size": 63488 00:11:18.540 } 00:11:18.540 ] 00:11:18.540 }' 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.540 08:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.108 08:22:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.108 [2024-12-13 08:22:31.293667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.108 "name": "raid_bdev1", 00:11:19.108 "aliases": [ 00:11:19.108 "e85fbf92-8712-4ae3-b244-e080181fe9fc" 00:11:19.108 ], 00:11:19.108 "product_name": "Raid Volume", 00:11:19.108 "block_size": 512, 00:11:19.108 "num_blocks": 253952, 00:11:19.108 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:19.108 "assigned_rate_limits": { 00:11:19.108 "rw_ios_per_sec": 0, 00:11:19.108 "rw_mbytes_per_sec": 0, 00:11:19.108 "r_mbytes_per_sec": 0, 00:11:19.108 "w_mbytes_per_sec": 0 00:11:19.108 }, 00:11:19.108 "claimed": false, 00:11:19.108 "zoned": false, 00:11:19.108 "supported_io_types": { 00:11:19.108 "read": true, 00:11:19.108 "write": true, 00:11:19.108 "unmap": true, 00:11:19.108 "flush": true, 00:11:19.108 "reset": true, 00:11:19.108 "nvme_admin": false, 00:11:19.108 "nvme_io": false, 00:11:19.108 "nvme_io_md": false, 00:11:19.108 "write_zeroes": true, 00:11:19.108 "zcopy": false, 00:11:19.108 "get_zone_info": false, 00:11:19.108 "zone_management": false, 00:11:19.108 "zone_append": false, 00:11:19.108 "compare": false, 00:11:19.108 "compare_and_write": false, 00:11:19.108 "abort": false, 00:11:19.108 "seek_hole": false, 00:11:19.108 "seek_data": false, 00:11:19.108 "copy": false, 00:11:19.108 "nvme_iov_md": false 00:11:19.108 }, 00:11:19.108 
"memory_domains": [ 00:11:19.108 { 00:11:19.108 "dma_device_id": "system", 00:11:19.108 "dma_device_type": 1 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.108 "dma_device_type": 2 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "system", 00:11:19.108 "dma_device_type": 1 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.108 "dma_device_type": 2 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "system", 00:11:19.108 "dma_device_type": 1 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.108 "dma_device_type": 2 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "system", 00:11:19.108 "dma_device_type": 1 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.108 "dma_device_type": 2 00:11:19.108 } 00:11:19.108 ], 00:11:19.108 "driver_specific": { 00:11:19.108 "raid": { 00:11:19.108 "uuid": "e85fbf92-8712-4ae3-b244-e080181fe9fc", 00:11:19.108 "strip_size_kb": 64, 00:11:19.108 "state": "online", 00:11:19.108 "raid_level": "raid0", 00:11:19.108 "superblock": true, 00:11:19.108 "num_base_bdevs": 4, 00:11:19.108 "num_base_bdevs_discovered": 4, 00:11:19.108 "num_base_bdevs_operational": 4, 00:11:19.108 "base_bdevs_list": [ 00:11:19.108 { 00:11:19.108 "name": "pt1", 00:11:19.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.108 "is_configured": true, 00:11:19.108 "data_offset": 2048, 00:11:19.108 "data_size": 63488 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "name": "pt2", 00:11:19.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.108 "is_configured": true, 00:11:19.108 "data_offset": 2048, 00:11:19.108 "data_size": 63488 00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "name": "pt3", 00:11:19.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.108 "is_configured": true, 00:11:19.108 "data_offset": 2048, 00:11:19.108 "data_size": 63488 
00:11:19.108 }, 00:11:19.108 { 00:11:19.108 "name": "pt4", 00:11:19.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.108 "is_configured": true, 00:11:19.108 "data_offset": 2048, 00:11:19.108 "data_size": 63488 00:11:19.108 } 00:11:19.108 ] 00:11:19.108 } 00:11:19.108 } 00:11:19.108 }' 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:19.108 pt2 00:11:19.108 pt3 00:11:19.108 pt4' 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.108 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.368 [2024-12-13 08:22:31.621038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e85fbf92-8712-4ae3-b244-e080181fe9fc '!=' e85fbf92-8712-4ae3-b244-e080181fe9fc ']' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70890 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70890 ']' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70890 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70890 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70890' 00:11:19.368 killing process with pid 70890 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70890 00:11:19.368 [2024-12-13 08:22:31.705845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.368 [2024-12-13 08:22:31.706022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.368 08:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70890 00:11:19.368 [2024-12-13 08:22:31.706161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.368 [2024-12-13 08:22:31.706219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:19.937 [2024-12-13 08:22:32.172787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.317 08:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:21.317 00:11:21.317 real 0m6.059s 00:11:21.317 user 0m8.609s 00:11:21.317 sys 0m1.023s 00:11:21.317 08:22:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.317 08:22:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.317 ************************************ 00:11:21.317 END TEST raid_superblock_test 
00:11:21.317 ************************************ 00:11:21.317 08:22:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:21.317 08:22:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.317 08:22:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.317 08:22:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.317 ************************************ 00:11:21.317 START TEST raid_read_error_test 00:11:21.317 ************************************ 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uodjurabKN 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71162 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71162 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71162 ']' 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.317 08:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.317 [2024-12-13 08:22:33.674292] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:11:21.317 [2024-12-13 08:22:33.674498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71162 ] 00:11:21.576 [2024-12-13 08:22:33.845731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.834 [2024-12-13 08:22:33.979510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.093 [2024-12-13 08:22:34.201789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.093 [2024-12-13 08:22:34.201937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 BaseBdev1_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 true 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 [2024-12-13 08:22:34.620535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:22.352 [2024-12-13 08:22:34.620642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.352 [2024-12-13 08:22:34.620667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:22.352 [2024-12-13 08:22:34.620678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.352 [2024-12-13 08:22:34.623032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.352 [2024-12-13 08:22:34.623083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:22.352 BaseBdev1 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 BaseBdev2_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 true 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.352 [2024-12-13 08:22:34.686254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.352 [2024-12-13 08:22:34.686323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.352 [2024-12-13 08:22:34.686345] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:22.352 [2024-12-13 08:22:34.686356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.352 [2024-12-13 08:22:34.688768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.352 [2024-12-13 08:22:34.688885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:22.352 BaseBdev2 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.352 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.353 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.353 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 BaseBdev3_malloc 00:11:22.612 08:22:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 true 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 [2024-12-13 08:22:34.768378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:22.612 [2024-12-13 08:22:34.768465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.612 [2024-12-13 08:22:34.768507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:22.612 [2024-12-13 08:22:34.768518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.612 [2024-12-13 08:22:34.770970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.612 [2024-12-13 08:22:34.771029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:22.612 BaseBdev3 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:22.612 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 BaseBdev4_malloc 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 true 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 [2024-12-13 08:22:34.833415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:22.613 [2024-12-13 08:22:34.833479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.613 [2024-12-13 08:22:34.833505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:22.613 [2024-12-13 08:22:34.833516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.613 [2024-12-13 08:22:34.836003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.613 [2024-12-13 08:22:34.836133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:22.613 BaseBdev4 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 [2024-12-13 08:22:34.845461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.613 [2024-12-13 08:22:34.847501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.613 [2024-12-13 08:22:34.847680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.613 [2024-12-13 08:22:34.847776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.613 [2024-12-13 08:22:34.848061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:22.613 [2024-12-13 08:22:34.848083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.613 [2024-12-13 08:22:34.848412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:22.613 [2024-12-13 08:22:34.848630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:22.613 [2024-12-13 08:22:34.848645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:22.613 [2024-12-13 08:22:34.848830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:22.613 08:22:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.613 "name": "raid_bdev1", 00:11:22.613 "uuid": "69825437-cb70-45ef-8f9d-884d339bd593", 00:11:22.613 "strip_size_kb": 64, 00:11:22.613 "state": "online", 00:11:22.613 "raid_level": "raid0", 00:11:22.613 "superblock": true, 00:11:22.613 "num_base_bdevs": 4, 00:11:22.613 "num_base_bdevs_discovered": 4, 00:11:22.613 "num_base_bdevs_operational": 4, 00:11:22.613 "base_bdevs_list": [ 00:11:22.613 
{ 00:11:22.613 "name": "BaseBdev1", 00:11:22.613 "uuid": "d1cb0830-85a4-57e6-94b2-701f153a4520", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 2048, 00:11:22.613 "data_size": 63488 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev2", 00:11:22.613 "uuid": "6957f564-cac4-5070-a9ca-90543009afaf", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 2048, 00:11:22.613 "data_size": 63488 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev3", 00:11:22.613 "uuid": "55e9fc19-93ee-560e-af5d-62f9832ea86b", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 2048, 00:11:22.613 "data_size": 63488 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev4", 00:11:22.613 "uuid": "49a2f3df-ba5e-556a-9b90-18cfb2ef7271", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 2048, 00:11:22.613 "data_size": 63488 00:11:22.613 } 00:11:22.613 ] 00:11:22.613 }' 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.613 08:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.183 08:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:23.183 08:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:23.183 [2024-12-13 08:22:35.389857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.118 08:22:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.118 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.118 08:22:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.118 "name": "raid_bdev1", 00:11:24.118 "uuid": "69825437-cb70-45ef-8f9d-884d339bd593", 00:11:24.118 "strip_size_kb": 64, 00:11:24.118 "state": "online", 00:11:24.118 "raid_level": "raid0", 00:11:24.118 "superblock": true, 00:11:24.118 "num_base_bdevs": 4, 00:11:24.118 "num_base_bdevs_discovered": 4, 00:11:24.118 "num_base_bdevs_operational": 4, 00:11:24.118 "base_bdevs_list": [ 00:11:24.119 { 00:11:24.119 "name": "BaseBdev1", 00:11:24.119 "uuid": "d1cb0830-85a4-57e6-94b2-701f153a4520", 00:11:24.119 "is_configured": true, 00:11:24.119 "data_offset": 2048, 00:11:24.119 "data_size": 63488 00:11:24.119 }, 00:11:24.119 { 00:11:24.119 "name": "BaseBdev2", 00:11:24.119 "uuid": "6957f564-cac4-5070-a9ca-90543009afaf", 00:11:24.119 "is_configured": true, 00:11:24.119 "data_offset": 2048, 00:11:24.119 "data_size": 63488 00:11:24.119 }, 00:11:24.119 { 00:11:24.119 "name": "BaseBdev3", 00:11:24.119 "uuid": "55e9fc19-93ee-560e-af5d-62f9832ea86b", 00:11:24.119 "is_configured": true, 00:11:24.119 "data_offset": 2048, 00:11:24.119 "data_size": 63488 00:11:24.119 }, 00:11:24.119 { 00:11:24.119 "name": "BaseBdev4", 00:11:24.119 "uuid": "49a2f3df-ba5e-556a-9b90-18cfb2ef7271", 00:11:24.119 "is_configured": true, 00:11:24.119 "data_offset": 2048, 00:11:24.119 "data_size": 63488 00:11:24.119 } 00:11:24.119 ] 00:11:24.119 }' 00:11:24.119 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.119 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.687 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.688 [2024-12-13 08:22:36.754280] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:24.688 [2024-12-13 08:22:36.754378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.688 [2024-12-13 08:22:36.757653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.688 [2024-12-13 08:22:36.757760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.688 [2024-12-13 08:22:36.757832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.688 [2024-12-13 08:22:36.757932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.688 { 00:11:24.688 "results": [ 00:11:24.688 { 00:11:24.688 "job": "raid_bdev1", 00:11:24.688 "core_mask": "0x1", 00:11:24.688 "workload": "randrw", 00:11:24.688 "percentage": 50, 00:11:24.688 "status": "finished", 00:11:24.688 "queue_depth": 1, 00:11:24.688 "io_size": 131072, 00:11:24.688 "runtime": 1.365358, 00:11:24.688 "iops": 14333.969552307894, 00:11:24.688 "mibps": 1791.7461940384867, 00:11:24.688 "io_failed": 1, 00:11:24.688 "io_timeout": 0, 00:11:24.688 "avg_latency_us": 96.6919272430002, 00:11:24.688 "min_latency_us": 27.83580786026201, 00:11:24.688 "max_latency_us": 1545.3903930131005 00:11:24.688 } 00:11:24.688 ], 00:11:24.688 "core_count": 1 00:11:24.688 } 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71162 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71162 ']' 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71162 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71162 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.688 killing process with pid 71162 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71162' 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71162 00:11:24.688 [2024-12-13 08:22:36.807092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.688 08:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71162 00:11:24.953 [2024-12-13 08:22:37.164265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uodjurabKN 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.331 ************************************ 00:11:26.331 END TEST raid_read_error_test 00:11:26.331 ************************************ 00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:26.331 00:11:26.331 real 0m4.885s 
00:11:26.331 user 0m5.784s
00:11:26.331 sys 0m0.592s
00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.331 08:22:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.331 08:22:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:11:26.331 08:22:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:26.331 08:22:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.331 08:22:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:26.331 ************************************
00:11:26.331 START TEST raid_write_error_test
00:11:26.331 ************************************
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.49xFIUiXvT
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71308
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71308
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71308 ']'
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:26.331 08:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.331 [2024-12-13 08:22:38.628522] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization...
00:11:26.331 [2024-12-13 08:22:38.628726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71308 ]
00:11:26.590 [2024-12-13 08:22:38.804688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:26.590 [2024-12-13 08:22:38.932299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.849 [2024-12-13 08:22:39.152404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:26.849 [2024-12-13 08:22:39.152468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 BaseBdev1_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 true
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 [2024-12-13 08:22:39.537548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:27.417 [2024-12-13 08:22:39.537607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.417 [2024-12-13 08:22:39.537628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:27.417 [2024-12-13 08:22:39.537638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.417 [2024-12-13 08:22:39.539932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.417 [2024-12-13 08:22:39.540040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:27.417 BaseBdev1
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 BaseBdev2_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 true
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 [2024-12-13 08:22:39.607628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:27.417 [2024-12-13 08:22:39.607692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.417 [2024-12-13 08:22:39.607710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:27.417 [2024-12-13 08:22:39.607722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.417 [2024-12-13 08:22:39.610075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.417 [2024-12-13 08:22:39.610129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:27.417 BaseBdev2
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 BaseBdev3_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 true
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.417 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.417 [2024-12-13 08:22:39.686297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:27.418 [2024-12-13 08:22:39.686375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.418 [2024-12-13 08:22:39.686401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:27.418 [2024-12-13 08:22:39.686414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.418 [2024-12-13 08:22:39.688946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.418 [2024-12-13 08:22:39.688994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:27.418 BaseBdev3
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.418 BaseBdev4_malloc
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.418 true
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.418 [2024-12-13 08:22:39.756318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:27.418 [2024-12-13 08:22:39.756454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.418 [2024-12-13 08:22:39.756479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:27.418 [2024-12-13 08:22:39.756491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.418 [2024-12-13 08:22:39.758863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.418 [2024-12-13 08:22:39.758932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:27.418 BaseBdev4
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.418 [2024-12-13 08:22:39.768430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:27.418 [2024-12-13 08:22:39.770433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:27.418 [2024-12-13 08:22:39.770513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:27.418 [2024-12-13 08:22:39.770576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:27.418 [2024-12-13 08:22:39.770804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:11:27.418 [2024-12-13 08:22:39.770822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:27.418 [2024-12-13 08:22:39.771124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:11:27.418 [2024-12-13 08:22:39.771314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:11:27.418 [2024-12-13 08:22:39.771326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:11:27.418 [2024-12-13 08:22:39.771534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.418 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.677 "name": "raid_bdev1",
00:11:27.677 "uuid": "18053706-b129-45d5-ac3a-acb48db541f8",
00:11:27.677 "strip_size_kb": 64,
00:11:27.677 "state": "online",
00:11:27.677 "raid_level": "raid0",
00:11:27.677 "superblock": true,
00:11:27.677 "num_base_bdevs": 4,
00:11:27.677 "num_base_bdevs_discovered": 4,
00:11:27.677 "num_base_bdevs_operational": 4,
00:11:27.677 "base_bdevs_list": [
00:11:27.677 {
00:11:27.677 "name": "BaseBdev1",
00:11:27.677 "uuid": "207593c8-52e5-510f-b844-03a110d142ff",
00:11:27.677 "is_configured": true,
00:11:27.677 "data_offset": 2048,
00:11:27.677 "data_size": 63488
00:11:27.677 },
00:11:27.677 {
00:11:27.677 "name": "BaseBdev2",
00:11:27.677 "uuid": "b4a17645-35f6-5853-bb26-8026f4f47703",
00:11:27.677 "is_configured": true,
00:11:27.677 "data_offset": 2048,
00:11:27.677 "data_size": 63488
00:11:27.677 },
00:11:27.677 {
00:11:27.677 "name": "BaseBdev3",
00:11:27.677 "uuid": "fcffa936-109a-5783-bcc6-50b6ee304eb1",
00:11:27.677 "is_configured": true,
00:11:27.677 "data_offset": 2048,
00:11:27.677 "data_size": 63488
00:11:27.677 },
00:11:27.677 {
00:11:27.677 "name": "BaseBdev4",
00:11:27.677 "uuid": "e5d34c37-a5e2-5774-9557-00a3f9ea506a",
00:11:27.677 "is_configured": true,
00:11:27.677 "data_offset": 2048,
00:11:27.677 "data_size": 63488
00:11:27.677 }
00:11:27.677 ]
00:11:27.677 }'
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.677 08:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.935 08:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:27.935 08:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:28.211 [2024-12-13 08:22:40.360750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.147 "name": "raid_bdev1",
00:11:29.147 "uuid": "18053706-b129-45d5-ac3a-acb48db541f8",
00:11:29.147 "strip_size_kb": 64,
00:11:29.147 "state": "online",
00:11:29.147 "raid_level": "raid0",
00:11:29.147 "superblock": true,
00:11:29.147 "num_base_bdevs": 4,
00:11:29.147 "num_base_bdevs_discovered": 4,
00:11:29.147 "num_base_bdevs_operational": 4,
00:11:29.147 "base_bdevs_list": [
00:11:29.147 {
00:11:29.147 "name": "BaseBdev1",
00:11:29.147 "uuid": "207593c8-52e5-510f-b844-03a110d142ff",
00:11:29.147 "is_configured": true,
00:11:29.147 "data_offset": 2048,
00:11:29.147 "data_size": 63488
00:11:29.147 },
00:11:29.147 {
00:11:29.147 "name": "BaseBdev2",
00:11:29.147 "uuid": "b4a17645-35f6-5853-bb26-8026f4f47703",
00:11:29.147 "is_configured": true,
00:11:29.147 "data_offset": 2048,
00:11:29.147 "data_size": 63488
00:11:29.147 },
00:11:29.147 {
00:11:29.147 "name": "BaseBdev3",
00:11:29.147 "uuid": "fcffa936-109a-5783-bcc6-50b6ee304eb1",
00:11:29.147 "is_configured": true,
00:11:29.147 "data_offset": 2048,
00:11:29.147 "data_size": 63488
00:11:29.147 },
00:11:29.147 {
00:11:29.147 "name": "BaseBdev4",
00:11:29.147 "uuid": "e5d34c37-a5e2-5774-9557-00a3f9ea506a",
00:11:29.147 "is_configured": true,
00:11:29.147 "data_offset": 2048,
00:11:29.147 "data_size": 63488
00:11:29.147 }
00:11:29.147 ]
00:11:29.147 }'
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.147 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.406 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:29.406 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.406 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.406 [2024-12-13 08:22:41.768155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:29.406 [2024-12-13 08:22:41.768189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:29.664 [2024-12-13 08:22:41.771257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:29.664 [2024-12-13 08:22:41.771373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:29.664 [2024-12-13 08:22:41.771430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:29.664 [2024-12-13 08:22:41.771444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:11:29.664 {
00:11:29.664 "results": [
00:11:29.664 {
00:11:29.664 "job": "raid_bdev1",
00:11:29.664 "core_mask": "0x1",
00:11:29.664 "workload": "randrw",
00:11:29.664 "percentage": 50,
00:11:29.664 "status": "finished",
00:11:29.664 "queue_depth": 1,
00:11:29.664 "io_size": 131072,
00:11:29.664 "runtime": 1.4081,
00:11:29.664 "iops": 14144.592003408849,
00:11:29.664 "mibps": 1768.074000426106,
00:11:29.664 "io_failed": 1,
00:11:29.664 "io_timeout": 0,
00:11:29.664 "avg_latency_us": 97.81160995891014,
00:11:29.664 "min_latency_us": 27.83580786026201,
00:11:29.664 "max_latency_us": 1581.1633187772925
00:11:29.664 }
00:11:29.664 ],
00:11:29.664 "core_count": 1
00:11:29.664 }
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71308
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71308 ']'
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71308
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:11:29.664 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71308
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71308'
killing process with pid 71308
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71308
[2024-12-13 08:22:41.816921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:29.665 08:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71308
00:11:29.923 [2024-12-13 08:22:42.165796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.49xFIUiXvT
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
************************************
00:11:31.298 END TEST raid_write_error_test
************************************
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:11:31.298
00:11:31.298 real 0m4.943s
00:11:31.298 user 0m5.867s
00:11:31.298 sys 0m0.595s
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:31.298 08:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.298 08:22:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:31.298 08:22:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:11:31.298 08:22:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:31.298 08:22:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:31.298 08:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:31.298 ************************************
00:11:31.298 START TEST raid_state_function_test
************************************
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71460
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71460'
Process raid pid: 71460
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71460
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71460 ']'
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:31.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:31.298 08:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.298 [2024-12-13 08:22:43.628154] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization...
00:11:31.298 [2024-12-13 08:22:43.628273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:31.556 [2024-12-13 08:22:43.805828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:31.815 [2024-12-13 08:22:43.935302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:31.815 [2024-12-13 08:22:44.159089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:31.815 [2024-12-13 08:22:44.159204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.382 [2024-12-13 08:22:44.504198] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:32.382 [2024-12-13 08:22:44.504257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:32.382 [2024-12-13 08:22:44.504269] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:32.382 [2024-12-13 08:22:44.504281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:32.382 [2024-12-13 08:22:44.504288] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:32.382 [2024-12-13 08:22:44.504298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:32.382 [2024-12-13 08:22:44.504305] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:32.382 [2024-12-13 08:22:44.504314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:32.382 "name": "Existed_Raid",
00:11:32.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.382 "strip_size_kb": 64,
00:11:32.382 "state": "configuring",
00:11:32.382 "raid_level": "concat",
00:11:32.382 "superblock": false,
00:11:32.382 "num_base_bdevs": 4,
00:11:32.382 "num_base_bdevs_discovered": 0,
00:11:32.382 "num_base_bdevs_operational": 4,
00:11:32.382 "base_bdevs_list": [
00:11:32.382 {
00:11:32.382 "name": "BaseBdev1",
00:11:32.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.382 "is_configured": false,
00:11:32.382 "data_offset": 0,
00:11:32.382 "data_size": 0
00:11:32.382 },
00:11:32.382 {
00:11:32.382 "name": "BaseBdev2",
00:11:32.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.382 "is_configured": false,
00:11:32.382 "data_offset": 0,
00:11:32.382 "data_size": 0
00:11:32.382 },
00:11:32.382 {
00:11:32.382 "name": "BaseBdev3",
00:11:32.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.382 "is_configured": false,
00:11:32.382 "data_offset": 0,
00:11:32.382 "data_size": 0
00:11:32.382 },
00:11:32.382 {
00:11:32.382 "name": "BaseBdev4",
00:11:32.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.382 "is_configured": false,
00:11:32.382 "data_offset": 0,
00:11:32.382 "data_size": 0
00:11:32.382 }
00:11:32.382 ]
00:11:32.382 }'
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:32.382 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.641 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete
Existed_Raid 00:11:32.641 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.641 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.641 [2024-12-13 08:22:44.939368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.641 [2024-12-13 08:22:44.939464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.641 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.641 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.642 [2024-12-13 08:22:44.947358] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.642 [2024-12-13 08:22:44.947452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.642 [2024-12-13 08:22:44.947495] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.642 [2024-12-13 08:22:44.947533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.642 [2024-12-13 08:22:44.947565] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.642 [2024-12-13 08:22:44.947593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.642 [2024-12-13 08:22:44.947626] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.642 [2024-12-13 08:22:44.947662] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.642 [2024-12-13 08:22:44.997079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.642 BaseBdev1 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.642 08:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.642 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.642 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.642 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.642 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.903 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.903 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.903 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.903 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.903 [ 00:11:32.903 { 00:11:32.903 "name": "BaseBdev1", 00:11:32.903 "aliases": [ 00:11:32.903 "d1b660a9-39c8-4db7-9b40-0dca4c1670d3" 00:11:32.903 ], 00:11:32.903 "product_name": "Malloc disk", 00:11:32.903 "block_size": 512, 00:11:32.903 "num_blocks": 65536, 00:11:32.903 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:32.903 "assigned_rate_limits": { 00:11:32.903 "rw_ios_per_sec": 0, 00:11:32.903 "rw_mbytes_per_sec": 0, 00:11:32.903 "r_mbytes_per_sec": 0, 00:11:32.903 "w_mbytes_per_sec": 0 00:11:32.903 }, 00:11:32.903 "claimed": true, 00:11:32.903 "claim_type": "exclusive_write", 00:11:32.903 "zoned": false, 00:11:32.903 "supported_io_types": { 00:11:32.903 "read": true, 00:11:32.903 "write": true, 00:11:32.903 "unmap": true, 00:11:32.903 "flush": true, 00:11:32.904 "reset": true, 00:11:32.904 "nvme_admin": false, 00:11:32.904 "nvme_io": false, 00:11:32.904 "nvme_io_md": false, 00:11:32.904 "write_zeroes": true, 00:11:32.904 "zcopy": true, 00:11:32.904 "get_zone_info": false, 00:11:32.904 "zone_management": false, 00:11:32.904 "zone_append": false, 00:11:32.904 "compare": false, 00:11:32.904 "compare_and_write": false, 00:11:32.904 "abort": true, 00:11:32.904 "seek_hole": false, 00:11:32.904 "seek_data": false, 00:11:32.904 "copy": true, 00:11:32.904 "nvme_iov_md": false 00:11:32.904 }, 00:11:32.904 "memory_domains": [ 00:11:32.904 { 00:11:32.904 "dma_device_id": "system", 00:11:32.904 "dma_device_type": 1 00:11:32.904 }, 00:11:32.904 { 00:11:32.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.904 "dma_device_type": 2 00:11:32.904 } 00:11:32.904 ], 00:11:32.904 "driver_specific": {} 00:11:32.904 } 00:11:32.904 ] 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.904 "name": "Existed_Raid", 
00:11:32.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.904 "strip_size_kb": 64, 00:11:32.904 "state": "configuring", 00:11:32.904 "raid_level": "concat", 00:11:32.904 "superblock": false, 00:11:32.904 "num_base_bdevs": 4, 00:11:32.904 "num_base_bdevs_discovered": 1, 00:11:32.904 "num_base_bdevs_operational": 4, 00:11:32.904 "base_bdevs_list": [ 00:11:32.904 { 00:11:32.904 "name": "BaseBdev1", 00:11:32.904 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:32.904 "is_configured": true, 00:11:32.904 "data_offset": 0, 00:11:32.904 "data_size": 65536 00:11:32.904 }, 00:11:32.904 { 00:11:32.904 "name": "BaseBdev2", 00:11:32.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.904 "is_configured": false, 00:11:32.904 "data_offset": 0, 00:11:32.904 "data_size": 0 00:11:32.904 }, 00:11:32.904 { 00:11:32.904 "name": "BaseBdev3", 00:11:32.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.904 "is_configured": false, 00:11:32.904 "data_offset": 0, 00:11:32.904 "data_size": 0 00:11:32.904 }, 00:11:32.904 { 00:11:32.904 "name": "BaseBdev4", 00:11:32.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.904 "is_configured": false, 00:11:32.904 "data_offset": 0, 00:11:32.904 "data_size": 0 00:11:32.904 } 00:11:32.904 ] 00:11:32.904 }' 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.904 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.164 [2024-12-13 08:22:45.480324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.164 [2024-12-13 08:22:45.480381] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.164 [2024-12-13 08:22:45.488373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.164 [2024-12-13 08:22:45.490485] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.164 [2024-12-13 08:22:45.490533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.164 [2024-12-13 08:22:45.490544] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.164 [2024-12-13 08:22:45.490556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.164 [2024-12-13 08:22:45.490564] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.164 [2024-12-13 08:22:45.490574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.164 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.422 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.422 "name": "Existed_Raid", 00:11:33.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.422 "strip_size_kb": 64, 00:11:33.422 "state": "configuring", 00:11:33.422 "raid_level": "concat", 00:11:33.422 "superblock": false, 00:11:33.422 "num_base_bdevs": 4, 00:11:33.422 
"num_base_bdevs_discovered": 1, 00:11:33.422 "num_base_bdevs_operational": 4, 00:11:33.422 "base_bdevs_list": [ 00:11:33.422 { 00:11:33.422 "name": "BaseBdev1", 00:11:33.422 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:33.422 "is_configured": true, 00:11:33.422 "data_offset": 0, 00:11:33.422 "data_size": 65536 00:11:33.422 }, 00:11:33.422 { 00:11:33.422 "name": "BaseBdev2", 00:11:33.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.422 "is_configured": false, 00:11:33.422 "data_offset": 0, 00:11:33.422 "data_size": 0 00:11:33.422 }, 00:11:33.422 { 00:11:33.422 "name": "BaseBdev3", 00:11:33.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.422 "is_configured": false, 00:11:33.422 "data_offset": 0, 00:11:33.422 "data_size": 0 00:11:33.422 }, 00:11:33.422 { 00:11:33.422 "name": "BaseBdev4", 00:11:33.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.422 "is_configured": false, 00:11:33.422 "data_offset": 0, 00:11:33.422 "data_size": 0 00:11:33.422 } 00:11:33.422 ] 00:11:33.422 }' 00:11:33.422 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.422 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.681 [2024-12-13 08:22:45.949982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.681 BaseBdev2 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:33.681 08:22:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.681 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.681 [ 00:11:33.681 { 00:11:33.681 "name": "BaseBdev2", 00:11:33.681 "aliases": [ 00:11:33.681 "583395d1-d8e1-4ba3-88a1-81977c7221cc" 00:11:33.681 ], 00:11:33.681 "product_name": "Malloc disk", 00:11:33.681 "block_size": 512, 00:11:33.681 "num_blocks": 65536, 00:11:33.681 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:33.681 "assigned_rate_limits": { 00:11:33.681 "rw_ios_per_sec": 0, 00:11:33.681 "rw_mbytes_per_sec": 0, 00:11:33.681 "r_mbytes_per_sec": 0, 00:11:33.681 "w_mbytes_per_sec": 0 00:11:33.681 }, 00:11:33.681 "claimed": true, 00:11:33.681 "claim_type": "exclusive_write", 00:11:33.681 "zoned": false, 00:11:33.681 "supported_io_types": { 
00:11:33.681 "read": true, 00:11:33.681 "write": true, 00:11:33.681 "unmap": true, 00:11:33.681 "flush": true, 00:11:33.681 "reset": true, 00:11:33.681 "nvme_admin": false, 00:11:33.681 "nvme_io": false, 00:11:33.681 "nvme_io_md": false, 00:11:33.682 "write_zeroes": true, 00:11:33.682 "zcopy": true, 00:11:33.682 "get_zone_info": false, 00:11:33.682 "zone_management": false, 00:11:33.682 "zone_append": false, 00:11:33.682 "compare": false, 00:11:33.682 "compare_and_write": false, 00:11:33.682 "abort": true, 00:11:33.682 "seek_hole": false, 00:11:33.682 "seek_data": false, 00:11:33.682 "copy": true, 00:11:33.682 "nvme_iov_md": false 00:11:33.682 }, 00:11:33.682 "memory_domains": [ 00:11:33.682 { 00:11:33.682 "dma_device_id": "system", 00:11:33.682 "dma_device_type": 1 00:11:33.682 }, 00:11:33.682 { 00:11:33.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.682 "dma_device_type": 2 00:11:33.682 } 00:11:33.682 ], 00:11:33.682 "driver_specific": {} 00:11:33.682 } 00:11:33.682 ] 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.682 08:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.682 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.682 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.682 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.941 "name": "Existed_Raid", 00:11:33.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.941 "strip_size_kb": 64, 00:11:33.941 "state": "configuring", 00:11:33.941 "raid_level": "concat", 00:11:33.941 "superblock": false, 00:11:33.941 "num_base_bdevs": 4, 00:11:33.941 "num_base_bdevs_discovered": 2, 00:11:33.941 "num_base_bdevs_operational": 4, 00:11:33.941 "base_bdevs_list": [ 00:11:33.941 { 00:11:33.941 "name": "BaseBdev1", 00:11:33.941 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:33.941 "is_configured": true, 00:11:33.941 "data_offset": 0, 00:11:33.941 "data_size": 65536 00:11:33.941 }, 00:11:33.941 { 00:11:33.941 "name": "BaseBdev2", 00:11:33.941 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:33.941 
"is_configured": true, 00:11:33.941 "data_offset": 0, 00:11:33.941 "data_size": 65536 00:11:33.941 }, 00:11:33.941 { 00:11:33.941 "name": "BaseBdev3", 00:11:33.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.941 "is_configured": false, 00:11:33.941 "data_offset": 0, 00:11:33.941 "data_size": 0 00:11:33.941 }, 00:11:33.941 { 00:11:33.941 "name": "BaseBdev4", 00:11:33.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.941 "is_configured": false, 00:11:33.941 "data_offset": 0, 00:11:33.941 "data_size": 0 00:11:33.941 } 00:11:33.941 ] 00:11:33.941 }' 00:11:33.941 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.941 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.200 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.200 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.200 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.200 BaseBdev3 00:11:34.200 [2024-12-13 08:22:46.562484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.460 [ 00:11:34.460 { 00:11:34.460 "name": "BaseBdev3", 00:11:34.460 "aliases": [ 00:11:34.460 "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26" 00:11:34.460 ], 00:11:34.460 "product_name": "Malloc disk", 00:11:34.460 "block_size": 512, 00:11:34.460 "num_blocks": 65536, 00:11:34.460 "uuid": "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26", 00:11:34.460 "assigned_rate_limits": { 00:11:34.460 "rw_ios_per_sec": 0, 00:11:34.460 "rw_mbytes_per_sec": 0, 00:11:34.460 "r_mbytes_per_sec": 0, 00:11:34.460 "w_mbytes_per_sec": 0 00:11:34.460 }, 00:11:34.460 "claimed": true, 00:11:34.460 "claim_type": "exclusive_write", 00:11:34.460 "zoned": false, 00:11:34.460 "supported_io_types": { 00:11:34.460 "read": true, 00:11:34.460 "write": true, 00:11:34.460 "unmap": true, 00:11:34.460 "flush": true, 00:11:34.460 "reset": true, 00:11:34.460 "nvme_admin": false, 00:11:34.460 "nvme_io": false, 00:11:34.460 "nvme_io_md": false, 00:11:34.460 "write_zeroes": true, 00:11:34.460 "zcopy": true, 00:11:34.460 "get_zone_info": false, 00:11:34.460 "zone_management": false, 00:11:34.460 "zone_append": false, 00:11:34.460 "compare": false, 00:11:34.460 "compare_and_write": false, 
00:11:34.460 "abort": true, 00:11:34.460 "seek_hole": false, 00:11:34.460 "seek_data": false, 00:11:34.460 "copy": true, 00:11:34.460 "nvme_iov_md": false 00:11:34.460 }, 00:11:34.460 "memory_domains": [ 00:11:34.460 { 00:11:34.460 "dma_device_id": "system", 00:11:34.460 "dma_device_type": 1 00:11:34.460 }, 00:11:34.460 { 00:11:34.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.460 "dma_device_type": 2 00:11:34.460 } 00:11:34.460 ], 00:11:34.460 "driver_specific": {} 00:11:34.460 } 00:11:34.460 ] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
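The surrounding trace is driven by the `(( i < num_base_bdevs ))` loop at `bdev_raid.sh@250`: each iteration creates the next 32 MiB / 512-byte-block malloc bdev and re-verifies the raid, which should remain `configuring` until the final base bdev arrives. A hedged, self-contained sketch of that loop shape (the `rpc_cmd bdev_malloc_create` calls are replaced with `echo` stubs, and the state transition is modeled locally rather than queried):

```shell
# Sketch of the per-base-bdev loop; names and sizes mirror the log, the
# RPC side effects are stubbed out so this runs standalone.
num_base_bdevs=4
state=configuring
discovered=1   # BaseBdev1 is already claimed at this point in the log
for (( i = 1; i < num_base_bdevs; i++ )); do
    echo "rpc_cmd bdev_malloc_create 32 512 -b BaseBdev$((i + 1))"   # stub
    discovered=$((i + 1))
    # Only once the last base bdev is discovered does the raid go online.
    if (( discovered == num_base_bdevs )); then
        state=online
    fi
    echo "discovered=$discovered state=$state"
done
```

This matches the trace: after BaseBdev2 and BaseBdev3 the raid is still `configuring` (discovered 2, then 3), and only the BaseBdev4 creation further down triggers the `configuring` → online transition (`io device register 0x617000007e80`).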
00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.460 "name": "Existed_Raid", 00:11:34.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.460 "strip_size_kb": 64, 00:11:34.460 "state": "configuring", 00:11:34.460 "raid_level": "concat", 00:11:34.460 "superblock": false, 00:11:34.460 "num_base_bdevs": 4, 00:11:34.460 "num_base_bdevs_discovered": 3, 00:11:34.460 "num_base_bdevs_operational": 4, 00:11:34.460 "base_bdevs_list": [ 00:11:34.460 { 00:11:34.460 "name": "BaseBdev1", 00:11:34.460 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:34.460 "is_configured": true, 00:11:34.460 "data_offset": 0, 00:11:34.460 "data_size": 65536 00:11:34.460 }, 00:11:34.460 { 00:11:34.460 "name": "BaseBdev2", 00:11:34.460 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:34.460 "is_configured": true, 00:11:34.460 "data_offset": 0, 00:11:34.460 "data_size": 65536 00:11:34.460 }, 00:11:34.460 { 00:11:34.460 "name": "BaseBdev3", 00:11:34.460 "uuid": "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26", 00:11:34.460 "is_configured": true, 00:11:34.460 "data_offset": 0, 00:11:34.460 "data_size": 65536 00:11:34.460 }, 00:11:34.460 { 00:11:34.460 "name": "BaseBdev4", 00:11:34.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.460 "is_configured": false, 
00:11:34.460 "data_offset": 0, 00:11:34.460 "data_size": 0 00:11:34.460 } 00:11:34.460 ] 00:11:34.460 }' 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.460 08:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.720 [2024-12-13 08:22:47.068147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.720 [2024-12-13 08:22:47.068296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.720 [2024-12-13 08:22:47.068327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:34.720 [2024-12-13 08:22:47.068677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.720 [2024-12-13 08:22:47.068906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:34.720 [2024-12-13 08:22:47.068957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:34.720 [2024-12-13 08:22:47.069306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.720 BaseBdev4 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.720 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.979 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.979 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:34.979 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.979 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.979 [ 00:11:34.979 { 00:11:34.979 "name": "BaseBdev4", 00:11:34.979 "aliases": [ 00:11:34.980 "634bb06a-3d27-454f-a44c-8eab49aaefdf" 00:11:34.980 ], 00:11:34.980 "product_name": "Malloc disk", 00:11:34.980 "block_size": 512, 00:11:34.980 "num_blocks": 65536, 00:11:34.980 "uuid": "634bb06a-3d27-454f-a44c-8eab49aaefdf", 00:11:34.980 "assigned_rate_limits": { 00:11:34.980 "rw_ios_per_sec": 0, 00:11:34.980 "rw_mbytes_per_sec": 0, 00:11:34.980 "r_mbytes_per_sec": 0, 00:11:34.980 "w_mbytes_per_sec": 0 00:11:34.980 }, 00:11:34.980 "claimed": true, 00:11:34.980 "claim_type": "exclusive_write", 00:11:34.980 "zoned": false, 00:11:34.980 "supported_io_types": { 00:11:34.980 "read": true, 00:11:34.980 "write": true, 00:11:34.980 "unmap": true, 00:11:34.980 "flush": true, 00:11:34.980 "reset": true, 00:11:34.980 
"nvme_admin": false, 00:11:34.980 "nvme_io": false, 00:11:34.980 "nvme_io_md": false, 00:11:34.980 "write_zeroes": true, 00:11:34.980 "zcopy": true, 00:11:34.980 "get_zone_info": false, 00:11:34.980 "zone_management": false, 00:11:34.980 "zone_append": false, 00:11:34.980 "compare": false, 00:11:34.980 "compare_and_write": false, 00:11:34.980 "abort": true, 00:11:34.980 "seek_hole": false, 00:11:34.980 "seek_data": false, 00:11:34.980 "copy": true, 00:11:34.980 "nvme_iov_md": false 00:11:34.980 }, 00:11:34.980 "memory_domains": [ 00:11:34.980 { 00:11:34.980 "dma_device_id": "system", 00:11:34.980 "dma_device_type": 1 00:11:34.980 }, 00:11:34.980 { 00:11:34.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.980 "dma_device_type": 2 00:11:34.980 } 00:11:34.980 ], 00:11:34.980 "driver_specific": {} 00:11:34.980 } 00:11:34.980 ] 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.980 
08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.980 "name": "Existed_Raid", 00:11:34.980 "uuid": "d2826b23-0203-4f59-900f-437922aeef19", 00:11:34.980 "strip_size_kb": 64, 00:11:34.980 "state": "online", 00:11:34.980 "raid_level": "concat", 00:11:34.980 "superblock": false, 00:11:34.980 "num_base_bdevs": 4, 00:11:34.980 "num_base_bdevs_discovered": 4, 00:11:34.980 "num_base_bdevs_operational": 4, 00:11:34.980 "base_bdevs_list": [ 00:11:34.980 { 00:11:34.980 "name": "BaseBdev1", 00:11:34.980 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:34.980 "is_configured": true, 00:11:34.980 "data_offset": 0, 00:11:34.980 "data_size": 65536 00:11:34.980 }, 00:11:34.980 { 00:11:34.980 "name": "BaseBdev2", 00:11:34.980 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:34.980 "is_configured": true, 00:11:34.980 "data_offset": 0, 00:11:34.980 "data_size": 65536 00:11:34.980 }, 00:11:34.980 { 00:11:34.980 "name": "BaseBdev3", 
00:11:34.980 "uuid": "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26", 00:11:34.980 "is_configured": true, 00:11:34.980 "data_offset": 0, 00:11:34.980 "data_size": 65536 00:11:34.980 }, 00:11:34.980 { 00:11:34.980 "name": "BaseBdev4", 00:11:34.980 "uuid": "634bb06a-3d27-454f-a44c-8eab49aaefdf", 00:11:34.980 "is_configured": true, 00:11:34.980 "data_offset": 0, 00:11:34.980 "data_size": 65536 00:11:34.980 } 00:11:34.980 ] 00:11:34.980 }' 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.980 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.240 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.499 [2024-12-13 08:22:47.603682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.500 
08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.500 "name": "Existed_Raid", 00:11:35.500 "aliases": [ 00:11:35.500 "d2826b23-0203-4f59-900f-437922aeef19" 00:11:35.500 ], 00:11:35.500 "product_name": "Raid Volume", 00:11:35.500 "block_size": 512, 00:11:35.500 "num_blocks": 262144, 00:11:35.500 "uuid": "d2826b23-0203-4f59-900f-437922aeef19", 00:11:35.500 "assigned_rate_limits": { 00:11:35.500 "rw_ios_per_sec": 0, 00:11:35.500 "rw_mbytes_per_sec": 0, 00:11:35.500 "r_mbytes_per_sec": 0, 00:11:35.500 "w_mbytes_per_sec": 0 00:11:35.500 }, 00:11:35.500 "claimed": false, 00:11:35.500 "zoned": false, 00:11:35.500 "supported_io_types": { 00:11:35.500 "read": true, 00:11:35.500 "write": true, 00:11:35.500 "unmap": true, 00:11:35.500 "flush": true, 00:11:35.500 "reset": true, 00:11:35.500 "nvme_admin": false, 00:11:35.500 "nvme_io": false, 00:11:35.500 "nvme_io_md": false, 00:11:35.500 "write_zeroes": true, 00:11:35.500 "zcopy": false, 00:11:35.500 "get_zone_info": false, 00:11:35.500 "zone_management": false, 00:11:35.500 "zone_append": false, 00:11:35.500 "compare": false, 00:11:35.500 "compare_and_write": false, 00:11:35.500 "abort": false, 00:11:35.500 "seek_hole": false, 00:11:35.500 "seek_data": false, 00:11:35.500 "copy": false, 00:11:35.500 "nvme_iov_md": false 00:11:35.500 }, 00:11:35.500 "memory_domains": [ 00:11:35.500 { 00:11:35.500 "dma_device_id": "system", 00:11:35.500 "dma_device_type": 1 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.500 "dma_device_type": 2 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "system", 00:11:35.500 "dma_device_type": 1 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.500 "dma_device_type": 2 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "system", 00:11:35.500 "dma_device_type": 1 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:35.500 "dma_device_type": 2 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "system", 00:11:35.500 "dma_device_type": 1 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.500 "dma_device_type": 2 00:11:35.500 } 00:11:35.500 ], 00:11:35.500 "driver_specific": { 00:11:35.500 "raid": { 00:11:35.500 "uuid": "d2826b23-0203-4f59-900f-437922aeef19", 00:11:35.500 "strip_size_kb": 64, 00:11:35.500 "state": "online", 00:11:35.500 "raid_level": "concat", 00:11:35.500 "superblock": false, 00:11:35.500 "num_base_bdevs": 4, 00:11:35.500 "num_base_bdevs_discovered": 4, 00:11:35.500 "num_base_bdevs_operational": 4, 00:11:35.500 "base_bdevs_list": [ 00:11:35.500 { 00:11:35.500 "name": "BaseBdev1", 00:11:35.500 "uuid": "d1b660a9-39c8-4db7-9b40-0dca4c1670d3", 00:11:35.500 "is_configured": true, 00:11:35.500 "data_offset": 0, 00:11:35.500 "data_size": 65536 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "name": "BaseBdev2", 00:11:35.500 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:35.500 "is_configured": true, 00:11:35.500 "data_offset": 0, 00:11:35.500 "data_size": 65536 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "name": "BaseBdev3", 00:11:35.500 "uuid": "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26", 00:11:35.500 "is_configured": true, 00:11:35.500 "data_offset": 0, 00:11:35.500 "data_size": 65536 00:11:35.500 }, 00:11:35.500 { 00:11:35.500 "name": "BaseBdev4", 00:11:35.500 "uuid": "634bb06a-3d27-454f-a44c-8eab49aaefdf", 00:11:35.500 "is_configured": true, 00:11:35.500 "data_offset": 0, 00:11:35.500 "data_size": 65536 00:11:35.500 } 00:11:35.500 ] 00:11:35.500 } 00:11:35.500 } 00:11:35.500 }' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.500 BaseBdev2 
00:11:35.500 BaseBdev3 00:11:35.500 BaseBdev4' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.500 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.501 08:22:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.501 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.760 08:22:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.760 08:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.760 [2024-12-13 08:22:47.919012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.760 [2024-12-13 08:22:47.919048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.760 [2024-12-13 08:22:47.919118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.760 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.760 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:35.760 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:35.760 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.761 "name": "Existed_Raid", 00:11:35.761 "uuid": "d2826b23-0203-4f59-900f-437922aeef19", 00:11:35.761 "strip_size_kb": 64, 00:11:35.761 "state": "offline", 00:11:35.761 "raid_level": "concat", 00:11:35.761 "superblock": false, 00:11:35.761 "num_base_bdevs": 4, 00:11:35.761 "num_base_bdevs_discovered": 3, 00:11:35.761 "num_base_bdevs_operational": 3, 00:11:35.761 "base_bdevs_list": [ 00:11:35.761 { 00:11:35.761 "name": null, 00:11:35.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.761 "is_configured": false, 00:11:35.761 "data_offset": 0, 00:11:35.761 "data_size": 65536 00:11:35.761 }, 00:11:35.761 { 00:11:35.761 "name": "BaseBdev2", 00:11:35.761 "uuid": "583395d1-d8e1-4ba3-88a1-81977c7221cc", 00:11:35.761 "is_configured": 
true, 00:11:35.761 "data_offset": 0, 00:11:35.761 "data_size": 65536 00:11:35.761 }, 00:11:35.761 { 00:11:35.761 "name": "BaseBdev3", 00:11:35.761 "uuid": "6bbadce6-2a7b-4c02-a3d1-eb2a81193b26", 00:11:35.761 "is_configured": true, 00:11:35.761 "data_offset": 0, 00:11:35.761 "data_size": 65536 00:11:35.761 }, 00:11:35.761 { 00:11:35.761 "name": "BaseBdev4", 00:11:35.761 "uuid": "634bb06a-3d27-454f-a44c-8eab49aaefdf", 00:11:35.761 "is_configured": true, 00:11:35.761 "data_offset": 0, 00:11:35.761 "data_size": 65536 00:11:35.761 } 00:11:35.761 ] 00:11:35.761 }' 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.761 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.327 [2024-12-13 08:22:48.547084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.327 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.328 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.328 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.328 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.328 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.586 [2024-12-13 08:22:48.718749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.586 08:22:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.586 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.586 [2024-12-13 08:22:48.886415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:36.586 [2024-12-13 08:22:48.886522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.844 08:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.844 08:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.844 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 BaseBdev2 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 [ 00:11:36.845 { 00:11:36.845 "name": "BaseBdev2", 00:11:36.845 "aliases": [ 00:11:36.845 "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4" 00:11:36.845 ], 00:11:36.845 "product_name": "Malloc disk", 00:11:36.845 "block_size": 512, 00:11:36.845 "num_blocks": 65536, 00:11:36.845 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:36.845 "assigned_rate_limits": { 00:11:36.845 "rw_ios_per_sec": 0, 00:11:36.845 "rw_mbytes_per_sec": 0, 00:11:36.845 "r_mbytes_per_sec": 0, 00:11:36.845 "w_mbytes_per_sec": 0 00:11:36.845 }, 00:11:36.845 "claimed": false, 00:11:36.845 "zoned": false, 00:11:36.845 "supported_io_types": { 00:11:36.845 "read": true, 00:11:36.845 "write": true, 00:11:36.845 "unmap": true, 00:11:36.845 "flush": true, 00:11:36.845 "reset": true, 00:11:36.845 "nvme_admin": false, 00:11:36.845 "nvme_io": false, 00:11:36.845 "nvme_io_md": false, 00:11:36.845 "write_zeroes": true, 00:11:36.845 "zcopy": true, 00:11:36.845 "get_zone_info": false, 00:11:36.845 "zone_management": false, 00:11:36.845 "zone_append": false, 00:11:36.845 "compare": false, 00:11:36.845 "compare_and_write": false, 00:11:36.845 "abort": true, 00:11:36.845 "seek_hole": false, 00:11:36.845 "seek_data": false, 
00:11:36.845 "copy": true, 00:11:36.845 "nvme_iov_md": false 00:11:36.845 }, 00:11:36.845 "memory_domains": [ 00:11:36.845 { 00:11:36.845 "dma_device_id": "system", 00:11:36.845 "dma_device_type": 1 00:11:36.845 }, 00:11:36.845 { 00:11:36.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.845 "dma_device_type": 2 00:11:36.845 } 00:11:36.845 ], 00:11:36.845 "driver_specific": {} 00:11:36.845 } 00:11:36.845 ] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 BaseBdev3 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.845 
08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.103 [ 00:11:37.103 { 00:11:37.103 "name": "BaseBdev3", 00:11:37.103 "aliases": [ 00:11:37.103 "645e0e85-8621-404f-bf04-af04a2119032" 00:11:37.103 ], 00:11:37.103 "product_name": "Malloc disk", 00:11:37.103 "block_size": 512, 00:11:37.103 "num_blocks": 65536, 00:11:37.103 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:37.103 "assigned_rate_limits": { 00:11:37.103 "rw_ios_per_sec": 0, 00:11:37.103 "rw_mbytes_per_sec": 0, 00:11:37.103 "r_mbytes_per_sec": 0, 00:11:37.103 "w_mbytes_per_sec": 0 00:11:37.103 }, 00:11:37.103 "claimed": false, 00:11:37.103 "zoned": false, 00:11:37.103 "supported_io_types": { 00:11:37.103 "read": true, 00:11:37.103 "write": true, 00:11:37.103 "unmap": true, 00:11:37.103 "flush": true, 00:11:37.103 "reset": true, 00:11:37.103 "nvme_admin": false, 00:11:37.103 "nvme_io": false, 00:11:37.103 "nvme_io_md": false, 00:11:37.103 "write_zeroes": true, 00:11:37.104 "zcopy": true, 00:11:37.104 "get_zone_info": false, 00:11:37.104 "zone_management": false, 00:11:37.104 "zone_append": false, 00:11:37.104 "compare": false, 00:11:37.104 "compare_and_write": false, 00:11:37.104 "abort": true, 00:11:37.104 "seek_hole": false, 00:11:37.104 "seek_data": false, 00:11:37.104 
"copy": true, 00:11:37.104 "nvme_iov_md": false 00:11:37.104 }, 00:11:37.104 "memory_domains": [ 00:11:37.104 { 00:11:37.104 "dma_device_id": "system", 00:11:37.104 "dma_device_type": 1 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.104 "dma_device_type": 2 00:11:37.104 } 00:11:37.104 ], 00:11:37.104 "driver_specific": {} 00:11:37.104 } 00:11:37.104 ] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 BaseBdev4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.104 08:22:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 [ 00:11:37.104 { 00:11:37.104 "name": "BaseBdev4", 00:11:37.104 "aliases": [ 00:11:37.104 "07a1e255-cee8-43da-b7ba-490209bd0909" 00:11:37.104 ], 00:11:37.104 "product_name": "Malloc disk", 00:11:37.104 "block_size": 512, 00:11:37.104 "num_blocks": 65536, 00:11:37.104 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:37.104 "assigned_rate_limits": { 00:11:37.104 "rw_ios_per_sec": 0, 00:11:37.104 "rw_mbytes_per_sec": 0, 00:11:37.104 "r_mbytes_per_sec": 0, 00:11:37.104 "w_mbytes_per_sec": 0 00:11:37.104 }, 00:11:37.104 "claimed": false, 00:11:37.104 "zoned": false, 00:11:37.104 "supported_io_types": { 00:11:37.104 "read": true, 00:11:37.104 "write": true, 00:11:37.104 "unmap": true, 00:11:37.104 "flush": true, 00:11:37.104 "reset": true, 00:11:37.104 "nvme_admin": false, 00:11:37.104 "nvme_io": false, 00:11:37.104 "nvme_io_md": false, 00:11:37.104 "write_zeroes": true, 00:11:37.104 "zcopy": true, 00:11:37.104 "get_zone_info": false, 00:11:37.104 "zone_management": false, 00:11:37.104 "zone_append": false, 00:11:37.104 "compare": false, 00:11:37.104 "compare_and_write": false, 00:11:37.104 "abort": true, 00:11:37.104 "seek_hole": false, 00:11:37.104 "seek_data": false, 00:11:37.104 "copy": true, 
00:11:37.104 "nvme_iov_md": false 00:11:37.104 }, 00:11:37.104 "memory_domains": [ 00:11:37.104 { 00:11:37.104 "dma_device_id": "system", 00:11:37.104 "dma_device_type": 1 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.104 "dma_device_type": 2 00:11:37.104 } 00:11:37.104 ], 00:11:37.104 "driver_specific": {} 00:11:37.104 } 00:11:37.104 ] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 [2024-12-13 08:22:49.317468] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.104 [2024-12-13 08:22:49.317612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.104 [2024-12-13 08:22:49.317695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.104 [2024-12-13 08:22:49.319947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.104 [2024-12-13 08:22:49.320077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.104 "name": "Existed_Raid", 00:11:37.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.104 "strip_size_kb": 64, 00:11:37.104 "state": "configuring", 00:11:37.104 
"raid_level": "concat", 00:11:37.104 "superblock": false, 00:11:37.104 "num_base_bdevs": 4, 00:11:37.104 "num_base_bdevs_discovered": 3, 00:11:37.104 "num_base_bdevs_operational": 4, 00:11:37.104 "base_bdevs_list": [ 00:11:37.104 { 00:11:37.104 "name": "BaseBdev1", 00:11:37.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.104 "is_configured": false, 00:11:37.104 "data_offset": 0, 00:11:37.104 "data_size": 0 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "name": "BaseBdev2", 00:11:37.104 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:37.104 "is_configured": true, 00:11:37.104 "data_offset": 0, 00:11:37.104 "data_size": 65536 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "name": "BaseBdev3", 00:11:37.104 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:37.104 "is_configured": true, 00:11:37.104 "data_offset": 0, 00:11:37.104 "data_size": 65536 00:11:37.104 }, 00:11:37.104 { 00:11:37.104 "name": "BaseBdev4", 00:11:37.104 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:37.104 "is_configured": true, 00:11:37.104 "data_offset": 0, 00:11:37.104 "data_size": 65536 00:11:37.104 } 00:11:37.104 ] 00:11:37.104 }' 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.104 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 [2024-12-13 08:22:49.780675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.672 "name": "Existed_Raid", 00:11:37.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.672 "strip_size_kb": 64, 00:11:37.672 "state": "configuring", 00:11:37.672 "raid_level": "concat", 00:11:37.672 "superblock": false, 
00:11:37.672 "num_base_bdevs": 4, 00:11:37.672 "num_base_bdevs_discovered": 2, 00:11:37.672 "num_base_bdevs_operational": 4, 00:11:37.672 "base_bdevs_list": [ 00:11:37.672 { 00:11:37.672 "name": "BaseBdev1", 00:11:37.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.672 "is_configured": false, 00:11:37.672 "data_offset": 0, 00:11:37.672 "data_size": 0 00:11:37.672 }, 00:11:37.672 { 00:11:37.672 "name": null, 00:11:37.672 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:37.672 "is_configured": false, 00:11:37.672 "data_offset": 0, 00:11:37.672 "data_size": 65536 00:11:37.672 }, 00:11:37.672 { 00:11:37.672 "name": "BaseBdev3", 00:11:37.672 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:37.672 "is_configured": true, 00:11:37.672 "data_offset": 0, 00:11:37.672 "data_size": 65536 00:11:37.672 }, 00:11:37.672 { 00:11:37.672 "name": "BaseBdev4", 00:11:37.672 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:37.672 "is_configured": true, 00:11:37.672 "data_offset": 0, 00:11:37.672 "data_size": 65536 00:11:37.672 } 00:11:37.672 ] 00:11:37.672 }' 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.672 08:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.931 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.931 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.931 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.931 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.190 08:22:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 [2024-12-13 08:22:50.366234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.190 BaseBdev1 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.190 [ 00:11:38.190 { 00:11:38.190 "name": "BaseBdev1", 00:11:38.190 "aliases": [ 00:11:38.190 "d52508cc-9ac2-4c85-8b4d-44e6c79a3261" 00:11:38.190 ], 00:11:38.190 "product_name": "Malloc disk", 00:11:38.190 "block_size": 512, 00:11:38.190 "num_blocks": 65536, 00:11:38.190 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:38.190 "assigned_rate_limits": { 00:11:38.190 "rw_ios_per_sec": 0, 00:11:38.190 "rw_mbytes_per_sec": 0, 00:11:38.190 "r_mbytes_per_sec": 0, 00:11:38.190 "w_mbytes_per_sec": 0 00:11:38.190 }, 00:11:38.190 "claimed": true, 00:11:38.190 "claim_type": "exclusive_write", 00:11:38.190 "zoned": false, 00:11:38.190 "supported_io_types": { 00:11:38.190 "read": true, 00:11:38.190 "write": true, 00:11:38.190 "unmap": true, 00:11:38.190 "flush": true, 00:11:38.190 "reset": true, 00:11:38.190 "nvme_admin": false, 00:11:38.190 "nvme_io": false, 00:11:38.190 "nvme_io_md": false, 00:11:38.190 "write_zeroes": true, 00:11:38.190 "zcopy": true, 00:11:38.190 "get_zone_info": false, 00:11:38.190 "zone_management": false, 00:11:38.190 "zone_append": false, 00:11:38.190 "compare": false, 00:11:38.190 "compare_and_write": false, 00:11:38.190 "abort": true, 00:11:38.190 "seek_hole": false, 00:11:38.190 "seek_data": false, 00:11:38.190 "copy": true, 00:11:38.190 "nvme_iov_md": false 00:11:38.190 }, 00:11:38.190 "memory_domains": [ 00:11:38.190 { 00:11:38.190 "dma_device_id": "system", 00:11:38.190 "dma_device_type": 1 00:11:38.190 }, 00:11:38.190 { 00:11:38.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.190 "dma_device_type": 2 00:11:38.190 } 00:11:38.190 ], 00:11:38.190 "driver_specific": {} 00:11:38.190 } 00:11:38.190 ] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.190 "name": "Existed_Raid", 00:11:38.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.190 "strip_size_kb": 64, 00:11:38.190 "state": "configuring", 00:11:38.190 "raid_level": "concat", 00:11:38.190 "superblock": false, 
00:11:38.190 "num_base_bdevs": 4, 00:11:38.190 "num_base_bdevs_discovered": 3, 00:11:38.190 "num_base_bdevs_operational": 4, 00:11:38.190 "base_bdevs_list": [ 00:11:38.190 { 00:11:38.190 "name": "BaseBdev1", 00:11:38.190 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:38.190 "is_configured": true, 00:11:38.190 "data_offset": 0, 00:11:38.190 "data_size": 65536 00:11:38.190 }, 00:11:38.190 { 00:11:38.190 "name": null, 00:11:38.190 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:38.190 "is_configured": false, 00:11:38.190 "data_offset": 0, 00:11:38.190 "data_size": 65536 00:11:38.190 }, 00:11:38.190 { 00:11:38.190 "name": "BaseBdev3", 00:11:38.190 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:38.190 "is_configured": true, 00:11:38.190 "data_offset": 0, 00:11:38.190 "data_size": 65536 00:11:38.190 }, 00:11:38.190 { 00:11:38.190 "name": "BaseBdev4", 00:11:38.190 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:38.190 "is_configured": true, 00:11:38.190 "data_offset": 0, 00:11:38.190 "data_size": 65536 00:11:38.190 } 00:11:38.190 ] 00:11:38.190 }' 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.190 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.758 08:22:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.758 [2024-12-13 08:22:50.945378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.758 08:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.758 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.758 "name": "Existed_Raid", 00:11:38.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.758 "strip_size_kb": 64, 00:11:38.758 "state": "configuring", 00:11:38.758 "raid_level": "concat", 00:11:38.758 "superblock": false, 00:11:38.758 "num_base_bdevs": 4, 00:11:38.758 "num_base_bdevs_discovered": 2, 00:11:38.758 "num_base_bdevs_operational": 4, 00:11:38.758 "base_bdevs_list": [ 00:11:38.758 { 00:11:38.758 "name": "BaseBdev1", 00:11:38.758 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:38.758 "is_configured": true, 00:11:38.758 "data_offset": 0, 00:11:38.758 "data_size": 65536 00:11:38.758 }, 00:11:38.758 { 00:11:38.758 "name": null, 00:11:38.758 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:38.758 "is_configured": false, 00:11:38.758 "data_offset": 0, 00:11:38.758 "data_size": 65536 00:11:38.758 }, 00:11:38.758 { 00:11:38.758 "name": null, 00:11:38.758 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:38.758 "is_configured": false, 00:11:38.758 "data_offset": 0, 00:11:38.758 "data_size": 65536 00:11:38.758 }, 00:11:38.758 { 00:11:38.758 "name": "BaseBdev4", 00:11:38.758 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:38.758 "is_configured": true, 00:11:38.758 "data_offset": 0, 00:11:38.758 "data_size": 65536 00:11:38.758 } 00:11:38.758 ] 00:11:38.758 }' 00:11:38.758 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.758 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.017 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:39.017 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.017 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.017 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.017 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.276 [2024-12-13 08:22:51.412635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.276 "name": "Existed_Raid", 00:11:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.276 "strip_size_kb": 64, 00:11:39.276 "state": "configuring", 00:11:39.276 "raid_level": "concat", 00:11:39.276 "superblock": false, 00:11:39.276 "num_base_bdevs": 4, 00:11:39.276 "num_base_bdevs_discovered": 3, 00:11:39.276 "num_base_bdevs_operational": 4, 00:11:39.276 "base_bdevs_list": [ 00:11:39.276 { 00:11:39.276 "name": "BaseBdev1", 00:11:39.276 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": null, 00:11:39.276 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:39.276 "is_configured": false, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": "BaseBdev3", 00:11:39.276 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:39.276 
"is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": "BaseBdev4", 00:11:39.276 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 } 00:11:39.276 ] 00:11:39.276 }' 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.276 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.535 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.535 [2024-12-13 08:22:51.887906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.794 08:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.794 08:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.794 08:22:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.794 "name": "Existed_Raid", 00:11:39.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.794 "strip_size_kb": 64, 00:11:39.794 "state": "configuring", 00:11:39.794 "raid_level": "concat", 00:11:39.794 "superblock": false, 00:11:39.794 "num_base_bdevs": 4, 00:11:39.794 "num_base_bdevs_discovered": 2, 00:11:39.794 "num_base_bdevs_operational": 4, 
00:11:39.794 "base_bdevs_list": [ 00:11:39.794 { 00:11:39.794 "name": null, 00:11:39.794 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:39.794 "is_configured": false, 00:11:39.794 "data_offset": 0, 00:11:39.794 "data_size": 65536 00:11:39.794 }, 00:11:39.794 { 00:11:39.794 "name": null, 00:11:39.794 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:39.794 "is_configured": false, 00:11:39.794 "data_offset": 0, 00:11:39.794 "data_size": 65536 00:11:39.794 }, 00:11:39.794 { 00:11:39.794 "name": "BaseBdev3", 00:11:39.794 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:39.794 "is_configured": true, 00:11:39.794 "data_offset": 0, 00:11:39.794 "data_size": 65536 00:11:39.794 }, 00:11:39.794 { 00:11:39.794 "name": "BaseBdev4", 00:11:39.794 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:39.794 "is_configured": true, 00:11:39.794 "data_offset": 0, 00:11:39.794 "data_size": 65536 00:11:39.794 } 00:11:39.794 ] 00:11:39.794 }' 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.794 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.362 08:22:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 [2024-12-13 08:22:52.504890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.362 08:22:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.362 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.362 "name": "Existed_Raid", 00:11:40.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.362 "strip_size_kb": 64, 00:11:40.362 "state": "configuring", 00:11:40.362 "raid_level": "concat", 00:11:40.362 "superblock": false, 00:11:40.362 "num_base_bdevs": 4, 00:11:40.362 "num_base_bdevs_discovered": 3, 00:11:40.362 "num_base_bdevs_operational": 4, 00:11:40.362 "base_bdevs_list": [ 00:11:40.362 { 00:11:40.362 "name": null, 00:11:40.362 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:40.362 "is_configured": false, 00:11:40.362 "data_offset": 0, 00:11:40.362 "data_size": 65536 00:11:40.362 }, 00:11:40.362 { 00:11:40.362 "name": "BaseBdev2", 00:11:40.362 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:40.362 "is_configured": true, 00:11:40.362 "data_offset": 0, 00:11:40.362 "data_size": 65536 00:11:40.362 }, 00:11:40.362 { 00:11:40.362 "name": "BaseBdev3", 00:11:40.362 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:40.362 "is_configured": true, 00:11:40.363 "data_offset": 0, 00:11:40.363 "data_size": 65536 00:11:40.363 }, 00:11:40.363 { 00:11:40.363 "name": "BaseBdev4", 00:11:40.363 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:40.363 "is_configured": true, 00:11:40.363 "data_offset": 0, 00:11:40.363 "data_size": 65536 00:11:40.363 } 00:11:40.363 ] 00:11:40.363 }' 00:11:40.363 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.363 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.621 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.621 08:22:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.621 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.621 08:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.621 08:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d52508cc-9ac2-4c85-8b4d-44e6c79a3261 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.881 [2024-12-13 08:22:53.099939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.881 [2024-12-13 08:22:53.099996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.881 [2024-12-13 08:22:53.100004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:40.881 [2024-12-13 08:22:53.100321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:40.881 [2024-12-13 08:22:53.100476] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.881 [2024-12-13 08:22:53.100495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.881 [2024-12-13 08:22:53.100827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.881 NewBaseBdev 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.881 [ 00:11:40.881 { 
00:11:40.881 "name": "NewBaseBdev", 00:11:40.881 "aliases": [ 00:11:40.881 "d52508cc-9ac2-4c85-8b4d-44e6c79a3261" 00:11:40.881 ], 00:11:40.881 "product_name": "Malloc disk", 00:11:40.881 "block_size": 512, 00:11:40.881 "num_blocks": 65536, 00:11:40.881 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:40.881 "assigned_rate_limits": { 00:11:40.881 "rw_ios_per_sec": 0, 00:11:40.881 "rw_mbytes_per_sec": 0, 00:11:40.881 "r_mbytes_per_sec": 0, 00:11:40.881 "w_mbytes_per_sec": 0 00:11:40.881 }, 00:11:40.881 "claimed": true, 00:11:40.881 "claim_type": "exclusive_write", 00:11:40.881 "zoned": false, 00:11:40.881 "supported_io_types": { 00:11:40.881 "read": true, 00:11:40.881 "write": true, 00:11:40.881 "unmap": true, 00:11:40.881 "flush": true, 00:11:40.881 "reset": true, 00:11:40.881 "nvme_admin": false, 00:11:40.881 "nvme_io": false, 00:11:40.881 "nvme_io_md": false, 00:11:40.881 "write_zeroes": true, 00:11:40.881 "zcopy": true, 00:11:40.881 "get_zone_info": false, 00:11:40.881 "zone_management": false, 00:11:40.881 "zone_append": false, 00:11:40.881 "compare": false, 00:11:40.881 "compare_and_write": false, 00:11:40.881 "abort": true, 00:11:40.881 "seek_hole": false, 00:11:40.881 "seek_data": false, 00:11:40.881 "copy": true, 00:11:40.881 "nvme_iov_md": false 00:11:40.881 }, 00:11:40.881 "memory_domains": [ 00:11:40.881 { 00:11:40.881 "dma_device_id": "system", 00:11:40.881 "dma_device_type": 1 00:11:40.881 }, 00:11:40.881 { 00:11:40.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.881 "dma_device_type": 2 00:11:40.881 } 00:11:40.881 ], 00:11:40.881 "driver_specific": {} 00:11:40.881 } 00:11:40.881 ] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:40.881 
08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.881 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.882 "name": "Existed_Raid", 00:11:40.882 "uuid": "f41f50bc-87ee-49cd-ae01-b973e0f2649f", 00:11:40.882 "strip_size_kb": 64, 00:11:40.882 "state": "online", 00:11:40.882 "raid_level": "concat", 00:11:40.882 "superblock": false, 00:11:40.882 "num_base_bdevs": 4, 00:11:40.882 "num_base_bdevs_discovered": 4, 00:11:40.882 
"num_base_bdevs_operational": 4, 00:11:40.882 "base_bdevs_list": [ 00:11:40.882 { 00:11:40.882 "name": "NewBaseBdev", 00:11:40.882 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:40.882 "is_configured": true, 00:11:40.882 "data_offset": 0, 00:11:40.882 "data_size": 65536 00:11:40.882 }, 00:11:40.882 { 00:11:40.882 "name": "BaseBdev2", 00:11:40.882 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:40.882 "is_configured": true, 00:11:40.882 "data_offset": 0, 00:11:40.882 "data_size": 65536 00:11:40.882 }, 00:11:40.882 { 00:11:40.882 "name": "BaseBdev3", 00:11:40.882 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:40.882 "is_configured": true, 00:11:40.882 "data_offset": 0, 00:11:40.882 "data_size": 65536 00:11:40.882 }, 00:11:40.882 { 00:11:40.882 "name": "BaseBdev4", 00:11:40.882 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:40.882 "is_configured": true, 00:11:40.882 "data_offset": 0, 00:11:40.882 "data_size": 65536 00:11:40.882 } 00:11:40.882 ] 00:11:40.882 }' 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.882 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.450 
08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.450 [2024-12-13 08:22:53.599605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.450 "name": "Existed_Raid", 00:11:41.450 "aliases": [ 00:11:41.450 "f41f50bc-87ee-49cd-ae01-b973e0f2649f" 00:11:41.450 ], 00:11:41.450 "product_name": "Raid Volume", 00:11:41.450 "block_size": 512, 00:11:41.450 "num_blocks": 262144, 00:11:41.450 "uuid": "f41f50bc-87ee-49cd-ae01-b973e0f2649f", 00:11:41.450 "assigned_rate_limits": { 00:11:41.450 "rw_ios_per_sec": 0, 00:11:41.450 "rw_mbytes_per_sec": 0, 00:11:41.450 "r_mbytes_per_sec": 0, 00:11:41.450 "w_mbytes_per_sec": 0 00:11:41.450 }, 00:11:41.450 "claimed": false, 00:11:41.450 "zoned": false, 00:11:41.450 "supported_io_types": { 00:11:41.450 "read": true, 00:11:41.450 "write": true, 00:11:41.450 "unmap": true, 00:11:41.450 "flush": true, 00:11:41.450 "reset": true, 00:11:41.450 "nvme_admin": false, 00:11:41.450 "nvme_io": false, 00:11:41.450 "nvme_io_md": false, 00:11:41.450 "write_zeroes": true, 00:11:41.450 "zcopy": false, 00:11:41.450 "get_zone_info": false, 00:11:41.450 "zone_management": false, 00:11:41.450 "zone_append": false, 00:11:41.450 "compare": false, 00:11:41.450 "compare_and_write": false, 00:11:41.450 "abort": false, 00:11:41.450 "seek_hole": false, 00:11:41.450 "seek_data": false, 00:11:41.450 "copy": false, 00:11:41.450 "nvme_iov_md": false 00:11:41.450 }, 00:11:41.450 "memory_domains": [ 00:11:41.450 { 00:11:41.450 "dma_device_id": 
"system", 00:11:41.450 "dma_device_type": 1 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.450 "dma_device_type": 2 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "system", 00:11:41.450 "dma_device_type": 1 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.450 "dma_device_type": 2 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "system", 00:11:41.450 "dma_device_type": 1 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.450 "dma_device_type": 2 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "system", 00:11:41.450 "dma_device_type": 1 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.450 "dma_device_type": 2 00:11:41.450 } 00:11:41.450 ], 00:11:41.450 "driver_specific": { 00:11:41.450 "raid": { 00:11:41.450 "uuid": "f41f50bc-87ee-49cd-ae01-b973e0f2649f", 00:11:41.450 "strip_size_kb": 64, 00:11:41.450 "state": "online", 00:11:41.450 "raid_level": "concat", 00:11:41.450 "superblock": false, 00:11:41.450 "num_base_bdevs": 4, 00:11:41.450 "num_base_bdevs_discovered": 4, 00:11:41.450 "num_base_bdevs_operational": 4, 00:11:41.450 "base_bdevs_list": [ 00:11:41.450 { 00:11:41.450 "name": "NewBaseBdev", 00:11:41.450 "uuid": "d52508cc-9ac2-4c85-8b4d-44e6c79a3261", 00:11:41.450 "is_configured": true, 00:11:41.450 "data_offset": 0, 00:11:41.450 "data_size": 65536 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "name": "BaseBdev2", 00:11:41.450 "uuid": "3926aaa6-2c44-4dd0-bd3d-ef4537a2bdd4", 00:11:41.450 "is_configured": true, 00:11:41.450 "data_offset": 0, 00:11:41.450 "data_size": 65536 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "name": "BaseBdev3", 00:11:41.450 "uuid": "645e0e85-8621-404f-bf04-af04a2119032", 00:11:41.450 "is_configured": true, 00:11:41.450 "data_offset": 0, 00:11:41.450 "data_size": 65536 00:11:41.450 }, 00:11:41.450 { 00:11:41.450 "name": 
"BaseBdev4", 00:11:41.450 "uuid": "07a1e255-cee8-43da-b7ba-490209bd0909", 00:11:41.450 "is_configured": true, 00:11:41.450 "data_offset": 0, 00:11:41.450 "data_size": 65536 00:11:41.450 } 00:11:41.450 ] 00:11:41.450 } 00:11:41.450 } 00:11:41.450 }' 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.450 BaseBdev2 00:11:41.450 BaseBdev3 00:11:41.450 BaseBdev4' 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.450 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.716 08:22:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 [2024-12-13 08:22:53.910683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.716 [2024-12-13 08:22:53.910771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.716 [2024-12-13 08:22:53.910910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.716 [2024-12-13 08:22:53.911028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.716 [2024-12-13 08:22:53.911083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71460 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71460 ']' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71460 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71460 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71460' 00:11:41.716 killing process with pid 71460 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71460 00:11:41.716 [2024-12-13 08:22:53.958062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.716 08:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71460 00:11:42.292 [2024-12-13 08:22:54.408533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.670 00:11:43.670 real 0m12.109s 00:11:43.670 user 0m19.130s 00:11:43.670 sys 0m2.117s 00:11:43.670 ************************************ 00:11:43.670 END TEST raid_state_function_test 00:11:43.670 ************************************ 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.670 08:22:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:43.670 08:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.670 08:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.670 08:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.670 ************************************ 00:11:43.670 START TEST raid_state_function_test_sb 00:11:43.670 ************************************ 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.670 08:22:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:43.670 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72137
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72137'
00:11:43.671 Process raid pid: 72137
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72137
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72137 ']'
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:43.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:43.671 08:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.671 [2024-12-13 08:22:55.803780] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization...
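The `waitforlisten 72137` step above polls until the freshly started `bdev_svc` process accepts connections on the UNIX domain socket `/var/tmp/spdk.sock` (with `max_retries=100`). A minimal sketch of such a wait loop, assuming nothing beyond the trace itself — the function name `wait_for_unix_socket` and the retry/delay parameters are illustrative, not SPDK's actual `waitforlisten` helper:

```python
import os
import socket
import time

def wait_for_unix_socket(path, retries=100, delay=0.1):
    """Poll until a UNIX-domain socket at `path` accepts a connection.

    Returns True once a connect() succeeds, False after exhausting retries.
    (Illustrative only; SPDK's waitforlisten also keeps checking that the
    target PID is still alive between attempts.)
    """
    for _ in range(retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                pass  # socket file exists but server is not accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

The `'[' -z 72137 ']'` check in the trace is the guard against being called without a PID argument; only after that does the retry loop against `/var/tmp/spdk.sock` begin.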
00:11:43.671 [2024-12-13 08:22:55.803990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:43.671 [2024-12-13 08:22:55.981009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:43.930 [2024-12-13 08:22:56.107672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.190 [2024-12-13 08:22:56.328831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:44.190 [2024-12-13 08:22:56.328965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.449 [2024-12-13 08:22:56.685771] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:44.449 [2024-12-13 08:22:56.685883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:44.449 [2024-12-13 08:22:56.685929] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:44.449 [2024-12-13 08:22:56.685967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:44.449 [2024-12-13 08:22:56.685995] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:44.449 [2024-12-13 08:22:56.686021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:44.449 [2024-12-13 08:22:56.686082] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:44.449 [2024-12-13 08:22:56.686119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:44.449 "name": "Existed_Raid",
00:11:44.449 "uuid": "ab271b91-dae1-4549-94b5-6625f791ed46",
00:11:44.449 "strip_size_kb": 64,
00:11:44.449 "state": "configuring",
00:11:44.449 "raid_level": "concat",
00:11:44.449 "superblock": true,
00:11:44.449 "num_base_bdevs": 4,
00:11:44.449 "num_base_bdevs_discovered": 0,
00:11:44.449 "num_base_bdevs_operational": 4,
00:11:44.449 "base_bdevs_list": [
00:11:44.449 {
00:11:44.449 "name": "BaseBdev1",
00:11:44.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.449 "is_configured": false,
00:11:44.449 "data_offset": 0,
00:11:44.449 "data_size": 0
00:11:44.449 },
00:11:44.449 {
00:11:44.449 "name": "BaseBdev2",
00:11:44.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.449 "is_configured": false,
00:11:44.449 "data_offset": 0,
00:11:44.449 "data_size": 0
00:11:44.449 },
00:11:44.449 {
00:11:44.449 "name": "BaseBdev3",
00:11:44.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.449 "is_configured": false,
00:11:44.449 "data_offset": 0,
00:11:44.449 "data_size": 0
00:11:44.449 },
00:11:44.449 {
00:11:44.449 "name": "BaseBdev4",
00:11:44.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.449 "is_configured": false,
00:11:44.449 "data_offset": 0,
00:11:44.449 "data_size": 0
00:11:44.449 }
00:11:44.449 ]
00:11:44.449 }'
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:44.449 08:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 [2024-12-13 08:22:57.156901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:45.018 [2024-12-13 08:22:57.156946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 [2024-12-13 08:22:57.168880] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:45.018 [2024-12-13 08:22:57.168971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:45.018 [2024-12-13 08:22:57.169003] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:45.018 [2024-12-13 08:22:57.169027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:45.018 [2024-12-13 08:22:57.169069] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:45.018 [2024-12-13 08:22:57.169111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:45.018 [2024-12-13 08:22:57.169160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:45.018 [2024-12-13 08:22:57.169193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 [2024-12-13 08:22:57.220151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:45.018 BaseBdev1
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 [
00:11:45.018 {
00:11:45.018 "name": "BaseBdev1",
00:11:45.018 "aliases": [
00:11:45.018 "61bd4a27-5d05-4dff-8698-bb9085b5aa97"
00:11:45.018 ],
00:11:45.018 "product_name": "Malloc disk",
00:11:45.018 "block_size": 512,
00:11:45.018 "num_blocks": 65536,
00:11:45.018 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97",
00:11:45.018 "assigned_rate_limits": {
00:11:45.018 "rw_ios_per_sec": 0,
00:11:45.018 "rw_mbytes_per_sec": 0,
00:11:45.018 "r_mbytes_per_sec": 0,
00:11:45.018 "w_mbytes_per_sec": 0
00:11:45.018 },
00:11:45.018 "claimed": true,
00:11:45.018 "claim_type": "exclusive_write",
00:11:45.018 "zoned": false,
00:11:45.018 "supported_io_types": {
00:11:45.018 "read": true,
00:11:45.018 "write": true,
00:11:45.018 "unmap": true,
00:11:45.018 "flush": true,
00:11:45.018 "reset": true,
00:11:45.018 "nvme_admin": false,
00:11:45.018 "nvme_io": false,
00:11:45.018 "nvme_io_md": false,
00:11:45.018 "write_zeroes": true,
00:11:45.018 "zcopy": true,
00:11:45.018 "get_zone_info": false,
00:11:45.018 "zone_management": false,
00:11:45.018 "zone_append": false,
00:11:45.018 "compare": false,
00:11:45.018 "compare_and_write": false,
00:11:45.018 "abort": true,
00:11:45.018 "seek_hole": false,
00:11:45.018 "seek_data": false,
00:11:45.018 "copy": true,
00:11:45.018 "nvme_iov_md": false
00:11:45.018 },
00:11:45.018 "memory_domains": [
00:11:45.018 {
00:11:45.018 "dma_device_id": "system",
00:11:45.018 "dma_device_type": 1
00:11:45.018 },
00:11:45.018 {
00:11:45.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.018 "dma_device_type": 2
00:11:45.018 }
00:11:45.018 ],
00:11:45.018 "driver_specific": {}
00:11:45.018 }
00:11:45.018 ]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.018 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.018 "name": "Existed_Raid",
00:11:45.018 "uuid": "943d919b-5202-4c85-9e60-84fcdb7e0986",
00:11:45.018 "strip_size_kb": 64,
00:11:45.018 "state": "configuring",
00:11:45.018 "raid_level": "concat",
00:11:45.018 "superblock": true,
00:11:45.018 "num_base_bdevs": 4,
00:11:45.018 "num_base_bdevs_discovered": 1,
00:11:45.018 "num_base_bdevs_operational": 4,
00:11:45.018 "base_bdevs_list": [
00:11:45.018 {
00:11:45.018 "name": "BaseBdev1",
00:11:45.018 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97",
00:11:45.019 "is_configured": true,
00:11:45.019 "data_offset": 2048,
00:11:45.019 "data_size": 63488
00:11:45.019 },
00:11:45.019 {
00:11:45.019 "name": "BaseBdev2",
00:11:45.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.019 "is_configured": false,
00:11:45.019 "data_offset": 0,
00:11:45.019 "data_size": 0
00:11:45.019 },
00:11:45.019 {
00:11:45.019 "name": "BaseBdev3",
00:11:45.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.019 "is_configured": false,
00:11:45.019 "data_offset": 0,
00:11:45.019 "data_size": 0
00:11:45.019 },
00:11:45.019 {
00:11:45.019 "name": "BaseBdev4",
00:11:45.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.019 "is_configured": false,
00:11:45.019 "data_offset": 0,
00:11:45.019 "data_size": 0
00:11:45.019 }
00:11:45.019 ]
00:11:45.019 }'
00:11:45.019 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.019 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.587 08:22:57
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.587 [2024-12-13 08:22:57.723352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:45.587 [2024-12-13 08:22:57.723415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.587 [2024-12-13 08:22:57.735385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:45.587 [2024-12-13 08:22:57.737443] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:45.587 [2024-12-13 08:22:57.737541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:45.587 [2024-12-13 08:22:57.737577] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:45.587 [2024-12-13 08:22:57.737605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:45.587 [2024-12-13 08:22:57.737640] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:45.587 [2024-12-13 08:22:57.737675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.587 "name": "Existed_Raid",
00:11:45.587 "uuid": "63fad0a2-7841-4263-a794-326b8319951a",
00:11:45.587 "strip_size_kb": 64,
00:11:45.587 "state": "configuring",
00:11:45.587 "raid_level": "concat",
00:11:45.587 "superblock": true,
00:11:45.587 "num_base_bdevs": 4,
00:11:45.587 "num_base_bdevs_discovered": 1,
00:11:45.587 "num_base_bdevs_operational": 4,
00:11:45.587 "base_bdevs_list": [
00:11:45.587 {
00:11:45.587 "name": "BaseBdev1",
00:11:45.587 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97",
00:11:45.587 "is_configured": true,
00:11:45.587 "data_offset": 2048,
00:11:45.587 "data_size": 63488
00:11:45.587 },
00:11:45.587 {
00:11:45.587 "name": "BaseBdev2",
00:11:45.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.587 "is_configured": false,
00:11:45.587 "data_offset": 0,
00:11:45.587 "data_size": 0
00:11:45.587 },
00:11:45.587 {
00:11:45.587 "name": "BaseBdev3",
00:11:45.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.587 "is_configured": false,
00:11:45.587 "data_offset": 0,
00:11:45.587 "data_size": 0
00:11:45.587 },
00:11:45.587 {
00:11:45.587 "name": "BaseBdev4",
00:11:45.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.587 "is_configured": false,
00:11:45.587 "data_offset": 0,
00:11:45.587 "data_size": 0
00:11:45.587 }
00:11:45.587 ]
00:11:45.587 }'
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.587 08:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.847 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:45.847 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.847 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.107 [2024-12-13 08:22:58.217082] bdev_raid.c:3326:raid_bdev_configure_base_bdev:
*DEBUG*: bdev BaseBdev2 is claimed
00:11:46.107 BaseBdev2
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.107 [
00:11:46.107 {
00:11:46.107 "name": "BaseBdev2",
00:11:46.107 "aliases": [
00:11:46.107 "cec54ac5-e1be-4f3e-a3ef-746d01a227ec"
00:11:46.107 ],
00:11:46.107 "product_name": "Malloc disk",
00:11:46.107 "block_size": 512,
00:11:46.107 "num_blocks": 65536,
00:11:46.107 "uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec",
00:11:46.107 "assigned_rate_limits": {
00:11:46.107 "rw_ios_per_sec": 0,
00:11:46.107 "rw_mbytes_per_sec": 0,
00:11:46.107 "r_mbytes_per_sec": 0,
00:11:46.107 "w_mbytes_per_sec": 0
00:11:46.107 },
00:11:46.107 "claimed": true,
00:11:46.107 "claim_type": "exclusive_write",
00:11:46.107 "zoned": false,
00:11:46.107 "supported_io_types": {
00:11:46.107 "read": true,
00:11:46.107 "write": true,
00:11:46.107 "unmap": true,
00:11:46.107 "flush": true,
00:11:46.107 "reset": true,
00:11:46.107 "nvme_admin": false,
00:11:46.107 "nvme_io": false,
00:11:46.107 "nvme_io_md": false,
00:11:46.107 "write_zeroes": true,
00:11:46.107 "zcopy": true,
00:11:46.107 "get_zone_info": false,
00:11:46.107 "zone_management": false,
00:11:46.107 "zone_append": false,
00:11:46.107 "compare": false,
00:11:46.107 "compare_and_write": false,
00:11:46.107 "abort": true,
00:11:46.107 "seek_hole": false,
00:11:46.107 "seek_data": false,
00:11:46.107 "copy": true,
00:11:46.107 "nvme_iov_md": false
00:11:46.107 },
00:11:46.107 "memory_domains": [
00:11:46.107 {
00:11:46.107 "dma_device_id": "system",
00:11:46.107 "dma_device_type": 1
00:11:46.107 },
00:11:46.107 {
00:11:46.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.107 "dma_device_type": 2
00:11:46.107 }
00:11:46.107 ],
00:11:46.107 "driver_specific": {}
00:11:46.107 }
00:11:46.107 ]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.107 "name": "Existed_Raid",
00:11:46.107 "uuid": "63fad0a2-7841-4263-a794-326b8319951a",
00:11:46.107 "strip_size_kb": 64,
00:11:46.107 "state": "configuring",
00:11:46.107 "raid_level": "concat",
00:11:46.107 "superblock": true,
00:11:46.107 "num_base_bdevs": 4,
00:11:46.107 "num_base_bdevs_discovered": 2,
00:11:46.107 "num_base_bdevs_operational": 4,
00:11:46.107 "base_bdevs_list": [
00:11:46.107 {
00:11:46.107 "name": "BaseBdev1",
00:11:46.107 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97",
00:11:46.107 "is_configured": true,
00:11:46.107 "data_offset": 2048,
00:11:46.107 "data_size": 63488
00:11:46.107 },
00:11:46.107 {
00:11:46.107 "name": "BaseBdev2",
00:11:46.107 "uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec",
00:11:46.107 "is_configured": true,
00:11:46.107 "data_offset": 2048,
00:11:46.107 "data_size": 63488
00:11:46.107 },
00:11:46.107 {
00:11:46.107 "name": "BaseBdev3",
00:11:46.107 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.107 "is_configured": false,
00:11:46.107 "data_offset": 0,
00:11:46.107 "data_size": 0
00:11:46.107 },
00:11:46.107 {
00:11:46.107 "name": "BaseBdev4",
00:11:46.107 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.107 "is_configured": false,
00:11:46.107 "data_offset": 0,
00:11:46.107 "data_size": 0
00:11:46.107 }
00:11:46.107 ]
00:11:46.107 }'
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.107 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.367 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:46.367 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.367 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.626 [2024-12-13 08:22:58.780429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:46.626 BaseBdev3
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.626 [
00:11:46.626 {
00:11:46.626 "name": "BaseBdev3",
00:11:46.626 "aliases": [
00:11:46.626 "2923785e-b2a8-4239-9358-c76a6efce3b4"
00:11:46.626 ],
00:11:46.626 "product_name": "Malloc disk",
00:11:46.626 "block_size": 512,
00:11:46.626 "num_blocks": 65536,
00:11:46.626 "uuid": "2923785e-b2a8-4239-9358-c76a6efce3b4",
00:11:46.626 "assigned_rate_limits": {
00:11:46.626 "rw_ios_per_sec": 0,
00:11:46.626 "rw_mbytes_per_sec": 0,
00:11:46.626 "r_mbytes_per_sec": 0,
00:11:46.626 "w_mbytes_per_sec": 0
00:11:46.626 },
00:11:46.626 "claimed": true,
00:11:46.626 "claim_type": "exclusive_write",
00:11:46.626 "zoned": false,
00:11:46.626 "supported_io_types": {
00:11:46.626 "read": true,
00:11:46.626 "write": true,
00:11:46.626 "unmap": true,
00:11:46.626 "flush": true,
00:11:46.626 "reset": true,
00:11:46.626 "nvme_admin": false,
00:11:46.626 "nvme_io": false,
00:11:46.626 "nvme_io_md": false,
00:11:46.626 "write_zeroes": true,
00:11:46.626 "zcopy": true,
00:11:46.626 "get_zone_info": false,
00:11:46.626 "zone_management": false,
00:11:46.626 "zone_append": false,
00:11:46.626 "compare": false,
00:11:46.626 "compare_and_write": false,
00:11:46.626 "abort": true,
00:11:46.626 "seek_hole": false,
00:11:46.626 "seek_data": false,
00:11:46.626 "copy": true,
00:11:46.626 "nvme_iov_md": false
00:11:46.626 },
00:11:46.626 "memory_domains": [
00:11:46.626 {
00:11:46.626 "dma_device_id": "system",
00:11:46.626 "dma_device_type": 1
00:11:46.626 },
00:11:46.626 {
00:11:46.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.626 "dma_device_type": 2
00:11:46.626 }
00:11:46.626 ],
00:11:46.626 "driver_specific": {}
00:11:46.626 }
00:11:46.626 ]
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.626 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.626 "name": "Existed_Raid", 00:11:46.626 "uuid": "63fad0a2-7841-4263-a794-326b8319951a", 00:11:46.626 "strip_size_kb": 64, 00:11:46.627 "state": "configuring", 00:11:46.627 "raid_level": "concat", 00:11:46.627 "superblock": true, 00:11:46.627 "num_base_bdevs": 4, 00:11:46.627 "num_base_bdevs_discovered": 3, 00:11:46.627 "num_base_bdevs_operational": 4, 00:11:46.627 "base_bdevs_list": [ 00:11:46.627 { 00:11:46.627 "name": "BaseBdev1", 00:11:46.627 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97", 00:11:46.627 "is_configured": true, 00:11:46.627 "data_offset": 2048, 00:11:46.627 "data_size": 63488 00:11:46.627 }, 00:11:46.627 { 00:11:46.627 "name": "BaseBdev2", 00:11:46.627 
"uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec", 00:11:46.627 "is_configured": true, 00:11:46.627 "data_offset": 2048, 00:11:46.627 "data_size": 63488 00:11:46.627 }, 00:11:46.627 { 00:11:46.627 "name": "BaseBdev3", 00:11:46.627 "uuid": "2923785e-b2a8-4239-9358-c76a6efce3b4", 00:11:46.627 "is_configured": true, 00:11:46.627 "data_offset": 2048, 00:11:46.627 "data_size": 63488 00:11:46.627 }, 00:11:46.627 { 00:11:46.627 "name": "BaseBdev4", 00:11:46.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.627 "is_configured": false, 00:11:46.627 "data_offset": 0, 00:11:46.627 "data_size": 0 00:11:46.627 } 00:11:46.627 ] 00:11:46.627 }' 00:11:46.627 08:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.627 08:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:46.885 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.885 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.144 [2024-12-13 08:22:59.289622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.144 [2024-12-13 08:22:59.289941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.144 [2024-12-13 08:22:59.289959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.144 [2024-12-13 08:22:59.290291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.144 [2024-12-13 08:22:59.290470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.144 [2024-12-13 08:22:59.290483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:47.144 [2024-12-13 08:22:59.290649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.144 BaseBdev4 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.144 [ 00:11:47.144 { 00:11:47.144 "name": "BaseBdev4", 00:11:47.144 "aliases": [ 00:11:47.144 "6f85e012-813b-4749-b14f-f55158d89f84" 00:11:47.144 ], 00:11:47.144 "product_name": "Malloc disk", 00:11:47.144 "block_size": 512, 
00:11:47.144 "num_blocks": 65536, 00:11:47.144 "uuid": "6f85e012-813b-4749-b14f-f55158d89f84", 00:11:47.144 "assigned_rate_limits": { 00:11:47.144 "rw_ios_per_sec": 0, 00:11:47.144 "rw_mbytes_per_sec": 0, 00:11:47.144 "r_mbytes_per_sec": 0, 00:11:47.144 "w_mbytes_per_sec": 0 00:11:47.144 }, 00:11:47.144 "claimed": true, 00:11:47.144 "claim_type": "exclusive_write", 00:11:47.144 "zoned": false, 00:11:47.144 "supported_io_types": { 00:11:47.144 "read": true, 00:11:47.144 "write": true, 00:11:47.144 "unmap": true, 00:11:47.144 "flush": true, 00:11:47.144 "reset": true, 00:11:47.144 "nvme_admin": false, 00:11:47.144 "nvme_io": false, 00:11:47.144 "nvme_io_md": false, 00:11:47.144 "write_zeroes": true, 00:11:47.144 "zcopy": true, 00:11:47.144 "get_zone_info": false, 00:11:47.144 "zone_management": false, 00:11:47.144 "zone_append": false, 00:11:47.144 "compare": false, 00:11:47.144 "compare_and_write": false, 00:11:47.144 "abort": true, 00:11:47.144 "seek_hole": false, 00:11:47.144 "seek_data": false, 00:11:47.144 "copy": true, 00:11:47.144 "nvme_iov_md": false 00:11:47.144 }, 00:11:47.144 "memory_domains": [ 00:11:47.144 { 00:11:47.144 "dma_device_id": "system", 00:11:47.144 "dma_device_type": 1 00:11:47.144 }, 00:11:47.144 { 00:11:47.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.144 "dma_device_type": 2 00:11:47.144 } 00:11:47.144 ], 00:11:47.144 "driver_specific": {} 00:11:47.144 } 00:11:47.144 ] 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.144 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.145 "name": "Existed_Raid", 00:11:47.145 "uuid": "63fad0a2-7841-4263-a794-326b8319951a", 00:11:47.145 "strip_size_kb": 64, 00:11:47.145 "state": "online", 00:11:47.145 "raid_level": "concat", 00:11:47.145 "superblock": true, 00:11:47.145 "num_base_bdevs": 
4, 00:11:47.145 "num_base_bdevs_discovered": 4, 00:11:47.145 "num_base_bdevs_operational": 4, 00:11:47.145 "base_bdevs_list": [ 00:11:47.145 { 00:11:47.145 "name": "BaseBdev1", 00:11:47.145 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97", 00:11:47.145 "is_configured": true, 00:11:47.145 "data_offset": 2048, 00:11:47.145 "data_size": 63488 00:11:47.145 }, 00:11:47.145 { 00:11:47.145 "name": "BaseBdev2", 00:11:47.145 "uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec", 00:11:47.145 "is_configured": true, 00:11:47.145 "data_offset": 2048, 00:11:47.145 "data_size": 63488 00:11:47.145 }, 00:11:47.145 { 00:11:47.145 "name": "BaseBdev3", 00:11:47.145 "uuid": "2923785e-b2a8-4239-9358-c76a6efce3b4", 00:11:47.145 "is_configured": true, 00:11:47.145 "data_offset": 2048, 00:11:47.145 "data_size": 63488 00:11:47.145 }, 00:11:47.145 { 00:11:47.145 "name": "BaseBdev4", 00:11:47.145 "uuid": "6f85e012-813b-4749-b14f-f55158d89f84", 00:11:47.145 "is_configured": true, 00:11:47.145 "data_offset": 2048, 00:11:47.145 "data_size": 63488 00:11:47.145 } 00:11:47.145 ] 00:11:47.145 }' 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.145 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.425 
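The `verify_raid_bdev_state` calls traced above fetch the `Existed_Raid` JSON and check its `state` field against an expected value (`configuring`, then `online`). A minimal bash sketch of that check, assuming only the JSON shape shown in the dumps above (`verify_state` and the regex extraction are hypothetical stand-ins, not the real helper from `bdev_raid.sh`, which pipes through `jq`):

```shell
# Hypothetical sketch: pull "state" out of a raid bdev info dump and
# compare it to the expected value, as verify_raid_bdev_state does.
verify_state() {
	local info=$1 expected=$2
	local re='"state": "([a-z]+)"'
	[[ $info =~ $re ]] || return 1          # no state field at all
	[[ ${BASH_REMATCH[1]} == "$expected" ]] # mismatch -> nonzero
}

# Shape mirrors the raid_bdev_info dumps in this log.
info='{ "name": "Existed_Raid", "state": "online", "raid_level": "concat" }'
verify_state "$info" online && echo "state ok"
```

The real helper also compares `raid_level`, `strip_size_kb`, and the discovered/operational base bdev counts from the same dump.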
08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.425 [2024-12-13 08:22:59.737401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.425 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.425 "name": "Existed_Raid", 00:11:47.425 "aliases": [ 00:11:47.425 "63fad0a2-7841-4263-a794-326b8319951a" 00:11:47.425 ], 00:11:47.425 "product_name": "Raid Volume", 00:11:47.425 "block_size": 512, 00:11:47.425 "num_blocks": 253952, 00:11:47.425 "uuid": "63fad0a2-7841-4263-a794-326b8319951a", 00:11:47.425 "assigned_rate_limits": { 00:11:47.425 "rw_ios_per_sec": 0, 00:11:47.425 "rw_mbytes_per_sec": 0, 00:11:47.425 "r_mbytes_per_sec": 0, 00:11:47.425 "w_mbytes_per_sec": 0 00:11:47.425 }, 00:11:47.425 "claimed": false, 00:11:47.425 "zoned": false, 00:11:47.425 "supported_io_types": { 00:11:47.425 "read": true, 00:11:47.425 "write": true, 00:11:47.425 "unmap": true, 00:11:47.425 "flush": true, 00:11:47.425 "reset": true, 00:11:47.425 "nvme_admin": false, 00:11:47.425 "nvme_io": false, 00:11:47.425 "nvme_io_md": false, 00:11:47.425 "write_zeroes": true, 00:11:47.425 "zcopy": false, 00:11:47.425 "get_zone_info": false, 00:11:47.425 "zone_management": false, 00:11:47.425 "zone_append": false, 00:11:47.425 "compare": false, 00:11:47.425 "compare_and_write": false, 00:11:47.425 "abort": false, 00:11:47.425 "seek_hole": false, 00:11:47.425 "seek_data": false, 00:11:47.425 "copy": false, 00:11:47.425 
"nvme_iov_md": false 00:11:47.425 }, 00:11:47.425 "memory_domains": [ 00:11:47.425 { 00:11:47.425 "dma_device_id": "system", 00:11:47.425 "dma_device_type": 1 00:11:47.425 }, 00:11:47.425 { 00:11:47.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.425 "dma_device_type": 2 00:11:47.425 }, 00:11:47.425 { 00:11:47.425 "dma_device_id": "system", 00:11:47.425 "dma_device_type": 1 00:11:47.425 }, 00:11:47.425 { 00:11:47.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.425 "dma_device_type": 2 00:11:47.425 }, 00:11:47.425 { 00:11:47.425 "dma_device_id": "system", 00:11:47.425 "dma_device_type": 1 00:11:47.425 }, 00:11:47.425 { 00:11:47.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.425 "dma_device_type": 2 00:11:47.425 }, 00:11:47.425 { 00:11:47.426 "dma_device_id": "system", 00:11:47.426 "dma_device_type": 1 00:11:47.426 }, 00:11:47.426 { 00:11:47.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.426 "dma_device_type": 2 00:11:47.426 } 00:11:47.426 ], 00:11:47.426 "driver_specific": { 00:11:47.426 "raid": { 00:11:47.426 "uuid": "63fad0a2-7841-4263-a794-326b8319951a", 00:11:47.426 "strip_size_kb": 64, 00:11:47.426 "state": "online", 00:11:47.426 "raid_level": "concat", 00:11:47.426 "superblock": true, 00:11:47.426 "num_base_bdevs": 4, 00:11:47.426 "num_base_bdevs_discovered": 4, 00:11:47.426 "num_base_bdevs_operational": 4, 00:11:47.426 "base_bdevs_list": [ 00:11:47.426 { 00:11:47.426 "name": "BaseBdev1", 00:11:47.426 "uuid": "61bd4a27-5d05-4dff-8698-bb9085b5aa97", 00:11:47.426 "is_configured": true, 00:11:47.426 "data_offset": 2048, 00:11:47.426 "data_size": 63488 00:11:47.426 }, 00:11:47.426 { 00:11:47.426 "name": "BaseBdev2", 00:11:47.426 "uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec", 00:11:47.426 "is_configured": true, 00:11:47.426 "data_offset": 2048, 00:11:47.426 "data_size": 63488 00:11:47.426 }, 00:11:47.426 { 00:11:47.426 "name": "BaseBdev3", 00:11:47.426 "uuid": "2923785e-b2a8-4239-9358-c76a6efce3b4", 00:11:47.426 "is_configured": true, 
00:11:47.426 "data_offset": 2048, 00:11:47.426 "data_size": 63488 00:11:47.426 }, 00:11:47.426 { 00:11:47.426 "name": "BaseBdev4", 00:11:47.426 "uuid": "6f85e012-813b-4749-b14f-f55158d89f84", 00:11:47.426 "is_configured": true, 00:11:47.426 "data_offset": 2048, 00:11:47.426 "data_size": 63488 00:11:47.426 } 00:11:47.426 ] 00:11:47.426 } 00:11:47.426 } 00:11:47.426 }' 00:11:47.684 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.684 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.684 BaseBdev2 00:11:47.684 BaseBdev3 00:11:47.685 BaseBdev4' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.685 08:22:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.685 08:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.685 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.943 [2024-12-13 08:23:00.064469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.943 [2024-12-13 08:23:00.064503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.943 [2024-12-13 08:23:00.064560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
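The trace above shows `has_redundancy concat` taking the `return 1` arm, which drives `expected_state=offline` after `BaseBdev1` is deleted: a concat array cannot survive losing a member. A minimal sketch of that branch, assuming the case arms match what the trace implies (concat is non-redundant; the redundant levels listed are an assumption, not copied from `bdev_raid.sh`):

```shell
# Assumed arms: raid levels with redundancy return 0, all others 1.
has_redundancy() {
	case $1 in
	raid1 | raid5f) return 0 ;;
	*) return 1 ;;
	esac
}

# Mirrors the traced logic: removing a base bdev from a non-redundant
# array takes the whole raid bdev offline.
if has_redundancy concat; then
	expected_state=online
else
	expected_state=offline
fi
echo "$expected_state"
```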
00:11:47.943 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.943 "name": "Existed_Raid", 00:11:47.943 "uuid": "63fad0a2-7841-4263-a794-326b8319951a", 00:11:47.943 "strip_size_kb": 64, 00:11:47.943 "state": "offline", 00:11:47.943 "raid_level": "concat", 00:11:47.943 "superblock": true, 00:11:47.943 "num_base_bdevs": 4, 00:11:47.943 "num_base_bdevs_discovered": 3, 00:11:47.943 "num_base_bdevs_operational": 3, 00:11:47.943 "base_bdevs_list": [ 00:11:47.943 { 00:11:47.943 "name": null, 00:11:47.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.943 "is_configured": false, 00:11:47.943 "data_offset": 0, 00:11:47.943 "data_size": 63488 00:11:47.943 }, 00:11:47.943 { 00:11:47.943 "name": "BaseBdev2", 00:11:47.943 "uuid": "cec54ac5-e1be-4f3e-a3ef-746d01a227ec", 00:11:47.943 "is_configured": true, 00:11:47.943 "data_offset": 2048, 00:11:47.944 "data_size": 63488 00:11:47.944 }, 00:11:47.944 { 00:11:47.944 "name": "BaseBdev3", 00:11:47.944 "uuid": "2923785e-b2a8-4239-9358-c76a6efce3b4", 00:11:47.944 "is_configured": true, 00:11:47.944 "data_offset": 2048, 00:11:47.944 "data_size": 63488 00:11:47.944 }, 00:11:47.944 { 00:11:47.944 "name": "BaseBdev4", 00:11:47.944 "uuid": "6f85e012-813b-4749-b14f-f55158d89f84", 00:11:47.944 "is_configured": true, 00:11:47.944 "data_offset": 2048, 00:11:47.944 "data_size": 63488 00:11:47.944 } 00:11:47.944 ] 00:11:47.944 }' 00:11:47.944 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.944 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.511 08:23:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 [2024-12-13 08:23:00.683836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.511 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 [2024-12-13 08:23:00.848491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.770 08:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:48.770 08:23:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.770 [2024-12-13 08:23:01.018895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:48.770 [2024-12-13 08:23:01.018960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.770 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 BaseBdev2 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.030 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 [ 00:11:49.031 { 00:11:49.031 "name": "BaseBdev2", 00:11:49.031 "aliases": [ 00:11:49.031 
"67848015-ba53-428f-9e6f-11b3d9cdd0cb" 00:11:49.031 ], 00:11:49.031 "product_name": "Malloc disk", 00:11:49.031 "block_size": 512, 00:11:49.031 "num_blocks": 65536, 00:11:49.031 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:49.031 "assigned_rate_limits": { 00:11:49.031 "rw_ios_per_sec": 0, 00:11:49.031 "rw_mbytes_per_sec": 0, 00:11:49.031 "r_mbytes_per_sec": 0, 00:11:49.031 "w_mbytes_per_sec": 0 00:11:49.031 }, 00:11:49.031 "claimed": false, 00:11:49.031 "zoned": false, 00:11:49.031 "supported_io_types": { 00:11:49.031 "read": true, 00:11:49.031 "write": true, 00:11:49.031 "unmap": true, 00:11:49.031 "flush": true, 00:11:49.031 "reset": true, 00:11:49.031 "nvme_admin": false, 00:11:49.031 "nvme_io": false, 00:11:49.031 "nvme_io_md": false, 00:11:49.031 "write_zeroes": true, 00:11:49.031 "zcopy": true, 00:11:49.031 "get_zone_info": false, 00:11:49.031 "zone_management": false, 00:11:49.031 "zone_append": false, 00:11:49.031 "compare": false, 00:11:49.031 "compare_and_write": false, 00:11:49.031 "abort": true, 00:11:49.031 "seek_hole": false, 00:11:49.031 "seek_data": false, 00:11:49.031 "copy": true, 00:11:49.031 "nvme_iov_md": false 00:11:49.031 }, 00:11:49.031 "memory_domains": [ 00:11:49.031 { 00:11:49.031 "dma_device_id": "system", 00:11:49.031 "dma_device_type": 1 00:11:49.031 }, 00:11:49.031 { 00:11:49.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.031 "dma_device_type": 2 00:11:49.031 } 00:11:49.031 ], 00:11:49.031 "driver_specific": {} 00:11:49.031 } 00:11:49.031 ] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.031 08:23:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 BaseBdev3 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 [ 00:11:49.031 { 
00:11:49.031 "name": "BaseBdev3", 00:11:49.031 "aliases": [ 00:11:49.031 "121a2e39-c3a7-4133-b493-73f88206e091" 00:11:49.031 ], 00:11:49.031 "product_name": "Malloc disk", 00:11:49.031 "block_size": 512, 00:11:49.031 "num_blocks": 65536, 00:11:49.031 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:49.031 "assigned_rate_limits": { 00:11:49.031 "rw_ios_per_sec": 0, 00:11:49.031 "rw_mbytes_per_sec": 0, 00:11:49.031 "r_mbytes_per_sec": 0, 00:11:49.031 "w_mbytes_per_sec": 0 00:11:49.031 }, 00:11:49.031 "claimed": false, 00:11:49.031 "zoned": false, 00:11:49.031 "supported_io_types": { 00:11:49.031 "read": true, 00:11:49.031 "write": true, 00:11:49.031 "unmap": true, 00:11:49.031 "flush": true, 00:11:49.031 "reset": true, 00:11:49.031 "nvme_admin": false, 00:11:49.031 "nvme_io": false, 00:11:49.031 "nvme_io_md": false, 00:11:49.031 "write_zeroes": true, 00:11:49.031 "zcopy": true, 00:11:49.031 "get_zone_info": false, 00:11:49.031 "zone_management": false, 00:11:49.031 "zone_append": false, 00:11:49.031 "compare": false, 00:11:49.031 "compare_and_write": false, 00:11:49.031 "abort": true, 00:11:49.031 "seek_hole": false, 00:11:49.031 "seek_data": false, 00:11:49.031 "copy": true, 00:11:49.031 "nvme_iov_md": false 00:11:49.031 }, 00:11:49.031 "memory_domains": [ 00:11:49.031 { 00:11:49.031 "dma_device_id": "system", 00:11:49.031 "dma_device_type": 1 00:11:49.031 }, 00:11:49.031 { 00:11:49.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.031 "dma_device_type": 2 00:11:49.031 } 00:11:49.031 ], 00:11:49.031 "driver_specific": {} 00:11:49.031 } 00:11:49.031 ] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 BaseBdev4 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.031 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:49.031 [ 00:11:49.031 { 00:11:49.031 "name": "BaseBdev4", 00:11:49.031 "aliases": [ 00:11:49.031 "ae9b6f8c-da49-4333-b042-0ba4cb20b333" 00:11:49.031 ], 00:11:49.031 "product_name": "Malloc disk", 00:11:49.031 "block_size": 512, 00:11:49.031 "num_blocks": 65536, 00:11:49.031 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:49.031 "assigned_rate_limits": { 00:11:49.031 "rw_ios_per_sec": 0, 00:11:49.031 "rw_mbytes_per_sec": 0, 00:11:49.031 "r_mbytes_per_sec": 0, 00:11:49.031 "w_mbytes_per_sec": 0 00:11:49.031 }, 00:11:49.031 "claimed": false, 00:11:49.031 "zoned": false, 00:11:49.031 "supported_io_types": { 00:11:49.031 "read": true, 00:11:49.031 "write": true, 00:11:49.031 "unmap": true, 00:11:49.031 "flush": true, 00:11:49.031 "reset": true, 00:11:49.031 "nvme_admin": false, 00:11:49.031 "nvme_io": false, 00:11:49.031 "nvme_io_md": false, 00:11:49.031 "write_zeroes": true, 00:11:49.031 "zcopy": true, 00:11:49.031 "get_zone_info": false, 00:11:49.031 "zone_management": false, 00:11:49.031 "zone_append": false, 00:11:49.031 "compare": false, 00:11:49.031 "compare_and_write": false, 00:11:49.031 "abort": true, 00:11:49.031 "seek_hole": false, 00:11:49.031 "seek_data": false, 00:11:49.031 "copy": true, 00:11:49.031 "nvme_iov_md": false 00:11:49.031 }, 00:11:49.031 "memory_domains": [ 00:11:49.031 { 00:11:49.031 "dma_device_id": "system", 00:11:49.031 "dma_device_type": 1 00:11:49.031 }, 00:11:49.031 { 00:11:49.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.031 "dma_device_type": 2 00:11:49.031 } 00:11:49.031 ], 00:11:49.031 "driver_specific": {} 00:11:49.032 } 00:11:49.032 ] 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.032 08:23:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.032 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.291 [2024-12-13 08:23:01.397165] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.291 [2024-12-13 08:23:01.397271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.291 [2024-12-13 08:23:01.397328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.291 [2024-12-13 08:23:01.399521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.291 [2024-12-13 08:23:01.399629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.291 "name": "Existed_Raid", 00:11:49.291 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:49.291 "strip_size_kb": 64, 00:11:49.291 "state": "configuring", 00:11:49.291 "raid_level": "concat", 00:11:49.291 "superblock": true, 00:11:49.291 "num_base_bdevs": 4, 00:11:49.291 "num_base_bdevs_discovered": 3, 00:11:49.291 "num_base_bdevs_operational": 4, 00:11:49.291 "base_bdevs_list": [ 00:11:49.291 { 00:11:49.291 "name": "BaseBdev1", 00:11:49.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.291 "is_configured": false, 00:11:49.291 "data_offset": 0, 00:11:49.291 "data_size": 0 00:11:49.291 }, 00:11:49.291 { 00:11:49.291 "name": "BaseBdev2", 00:11:49.291 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:49.291 "is_configured": true, 00:11:49.291 "data_offset": 2048, 00:11:49.291 "data_size": 63488 
00:11:49.291 }, 00:11:49.291 { 00:11:49.291 "name": "BaseBdev3", 00:11:49.291 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:49.291 "is_configured": true, 00:11:49.291 "data_offset": 2048, 00:11:49.291 "data_size": 63488 00:11:49.291 }, 00:11:49.291 { 00:11:49.291 "name": "BaseBdev4", 00:11:49.291 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:49.291 "is_configured": true, 00:11:49.291 "data_offset": 2048, 00:11:49.291 "data_size": 63488 00:11:49.291 } 00:11:49.291 ] 00:11:49.291 }' 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.291 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.550 [2024-12-13 08:23:01.828442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.550 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.551 "name": "Existed_Raid", 00:11:49.551 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:49.551 "strip_size_kb": 64, 00:11:49.551 "state": "configuring", 00:11:49.551 "raid_level": "concat", 00:11:49.551 "superblock": true, 00:11:49.551 "num_base_bdevs": 4, 00:11:49.551 "num_base_bdevs_discovered": 2, 00:11:49.551 "num_base_bdevs_operational": 4, 00:11:49.551 "base_bdevs_list": [ 00:11:49.551 { 00:11:49.551 "name": "BaseBdev1", 00:11:49.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.551 "is_configured": false, 00:11:49.551 "data_offset": 0, 00:11:49.551 "data_size": 0 00:11:49.551 }, 00:11:49.551 { 00:11:49.551 "name": null, 00:11:49.551 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:49.551 "is_configured": false, 00:11:49.551 "data_offset": 0, 00:11:49.551 "data_size": 63488 
00:11:49.551 }, 00:11:49.551 { 00:11:49.551 "name": "BaseBdev3", 00:11:49.551 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:49.551 "is_configured": true, 00:11:49.551 "data_offset": 2048, 00:11:49.551 "data_size": 63488 00:11:49.551 }, 00:11:49.551 { 00:11:49.551 "name": "BaseBdev4", 00:11:49.551 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:49.551 "is_configured": true, 00:11:49.551 "data_offset": 2048, 00:11:49.551 "data_size": 63488 00:11:49.551 } 00:11:49.551 ] 00:11:49.551 }' 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.551 08:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.124 [2024-12-13 08:23:02.360499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.124 BaseBdev1 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.124 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.124 [ 00:11:50.124 { 00:11:50.124 "name": "BaseBdev1", 00:11:50.124 "aliases": [ 00:11:50.124 "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17" 00:11:50.124 ], 00:11:50.124 "product_name": "Malloc disk", 00:11:50.124 "block_size": 512, 00:11:50.124 "num_blocks": 65536, 00:11:50.124 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:50.124 "assigned_rate_limits": { 00:11:50.124 "rw_ios_per_sec": 0, 00:11:50.124 "rw_mbytes_per_sec": 0, 
00:11:50.124 "r_mbytes_per_sec": 0, 00:11:50.124 "w_mbytes_per_sec": 0 00:11:50.124 }, 00:11:50.124 "claimed": true, 00:11:50.124 "claim_type": "exclusive_write", 00:11:50.124 "zoned": false, 00:11:50.124 "supported_io_types": { 00:11:50.124 "read": true, 00:11:50.124 "write": true, 00:11:50.124 "unmap": true, 00:11:50.124 "flush": true, 00:11:50.124 "reset": true, 00:11:50.124 "nvme_admin": false, 00:11:50.124 "nvme_io": false, 00:11:50.124 "nvme_io_md": false, 00:11:50.124 "write_zeroes": true, 00:11:50.124 "zcopy": true, 00:11:50.124 "get_zone_info": false, 00:11:50.124 "zone_management": false, 00:11:50.124 "zone_append": false, 00:11:50.124 "compare": false, 00:11:50.124 "compare_and_write": false, 00:11:50.124 "abort": true, 00:11:50.124 "seek_hole": false, 00:11:50.124 "seek_data": false, 00:11:50.124 "copy": true, 00:11:50.124 "nvme_iov_md": false 00:11:50.124 }, 00:11:50.124 "memory_domains": [ 00:11:50.124 { 00:11:50.124 "dma_device_id": "system", 00:11:50.124 "dma_device_type": 1 00:11:50.124 }, 00:11:50.124 { 00:11:50.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.124 "dma_device_type": 2 00:11:50.124 } 00:11:50.124 ], 00:11:50.125 "driver_specific": {} 00:11:50.125 } 00:11:50.125 ] 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.125 08:23:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.125 "name": "Existed_Raid", 00:11:50.125 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:50.125 "strip_size_kb": 64, 00:11:50.125 "state": "configuring", 00:11:50.125 "raid_level": "concat", 00:11:50.125 "superblock": true, 00:11:50.125 "num_base_bdevs": 4, 00:11:50.125 "num_base_bdevs_discovered": 3, 00:11:50.125 "num_base_bdevs_operational": 4, 00:11:50.125 "base_bdevs_list": [ 00:11:50.125 { 00:11:50.125 "name": "BaseBdev1", 00:11:50.125 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:50.125 "is_configured": true, 00:11:50.125 "data_offset": 2048, 00:11:50.125 "data_size": 63488 00:11:50.125 }, 00:11:50.125 { 
00:11:50.125 "name": null, 00:11:50.125 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:50.125 "is_configured": false, 00:11:50.125 "data_offset": 0, 00:11:50.125 "data_size": 63488 00:11:50.125 }, 00:11:50.125 { 00:11:50.125 "name": "BaseBdev3", 00:11:50.125 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:50.125 "is_configured": true, 00:11:50.125 "data_offset": 2048, 00:11:50.125 "data_size": 63488 00:11:50.125 }, 00:11:50.125 { 00:11:50.125 "name": "BaseBdev4", 00:11:50.125 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:50.125 "is_configured": true, 00:11:50.125 "data_offset": 2048, 00:11:50.125 "data_size": 63488 00:11:50.125 } 00:11:50.125 ] 00:11:50.125 }' 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.125 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.693 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.693 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 [2024-12-13 08:23:02.907676] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.694 08:23:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.694 "name": "Existed_Raid", 00:11:50.694 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:50.694 "strip_size_kb": 64, 00:11:50.694 "state": "configuring", 00:11:50.694 "raid_level": "concat", 00:11:50.694 "superblock": true, 00:11:50.694 "num_base_bdevs": 4, 00:11:50.694 "num_base_bdevs_discovered": 2, 00:11:50.694 "num_base_bdevs_operational": 4, 00:11:50.694 "base_bdevs_list": [ 00:11:50.694 { 00:11:50.694 "name": "BaseBdev1", 00:11:50.694 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:50.694 "is_configured": true, 00:11:50.694 "data_offset": 2048, 00:11:50.694 "data_size": 63488 00:11:50.694 }, 00:11:50.694 { 00:11:50.694 "name": null, 00:11:50.694 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:50.694 "is_configured": false, 00:11:50.694 "data_offset": 0, 00:11:50.694 "data_size": 63488 00:11:50.694 }, 00:11:50.694 { 00:11:50.694 "name": null, 00:11:50.694 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:50.694 "is_configured": false, 00:11:50.694 "data_offset": 0, 00:11:50.694 "data_size": 63488 00:11:50.694 }, 00:11:50.694 { 00:11:50.694 "name": "BaseBdev4", 00:11:50.694 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:50.694 "is_configured": true, 00:11:50.694 "data_offset": 2048, 00:11:50.694 "data_size": 63488 00:11:50.694 } 00:11:50.694 ] 00:11:50.694 }' 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.694 08:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.264 
08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.264 [2024-12-13 08:23:03.450726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.264 "name": "Existed_Raid", 00:11:51.264 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:51.264 "strip_size_kb": 64, 00:11:51.264 "state": "configuring", 00:11:51.264 "raid_level": "concat", 00:11:51.264 "superblock": true, 00:11:51.264 "num_base_bdevs": 4, 00:11:51.264 "num_base_bdevs_discovered": 3, 00:11:51.264 "num_base_bdevs_operational": 4, 00:11:51.264 "base_bdevs_list": [ 00:11:51.264 { 00:11:51.264 "name": "BaseBdev1", 00:11:51.264 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:51.264 "is_configured": true, 00:11:51.264 "data_offset": 2048, 00:11:51.264 "data_size": 63488 00:11:51.264 }, 00:11:51.264 { 00:11:51.264 "name": null, 00:11:51.264 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:51.264 "is_configured": false, 00:11:51.264 "data_offset": 0, 00:11:51.264 "data_size": 63488 00:11:51.264 }, 00:11:51.264 { 00:11:51.264 "name": "BaseBdev3", 00:11:51.264 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:51.264 "is_configured": true, 00:11:51.264 "data_offset": 2048, 00:11:51.264 "data_size": 63488 00:11:51.264 }, 00:11:51.264 { 00:11:51.264 "name": "BaseBdev4", 00:11:51.264 "uuid": 
"ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:51.264 "is_configured": true, 00:11:51.264 "data_offset": 2048, 00:11:51.264 "data_size": 63488 00:11:51.264 } 00:11:51.264 ] 00:11:51.264 }' 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.264 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.833 08:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 [2024-12-13 08:23:03.977927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.833 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.833 "name": "Existed_Raid", 00:11:51.833 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:51.833 "strip_size_kb": 64, 00:11:51.833 "state": "configuring", 00:11:51.833 "raid_level": "concat", 00:11:51.833 "superblock": true, 00:11:51.833 "num_base_bdevs": 4, 00:11:51.833 "num_base_bdevs_discovered": 2, 00:11:51.833 "num_base_bdevs_operational": 4, 00:11:51.833 "base_bdevs_list": [ 00:11:51.833 { 00:11:51.833 "name": null, 00:11:51.833 
"uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:51.833 "is_configured": false, 00:11:51.833 "data_offset": 0, 00:11:51.833 "data_size": 63488 00:11:51.833 }, 00:11:51.833 { 00:11:51.833 "name": null, 00:11:51.833 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:51.833 "is_configured": false, 00:11:51.833 "data_offset": 0, 00:11:51.833 "data_size": 63488 00:11:51.833 }, 00:11:51.833 { 00:11:51.833 "name": "BaseBdev3", 00:11:51.833 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:51.833 "is_configured": true, 00:11:51.833 "data_offset": 2048, 00:11:51.833 "data_size": 63488 00:11:51.833 }, 00:11:51.833 { 00:11:51.834 "name": "BaseBdev4", 00:11:51.834 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:51.834 "is_configured": true, 00:11:51.834 "data_offset": 2048, 00:11:51.834 "data_size": 63488 00:11:51.834 } 00:11:51.834 ] 00:11:51.834 }' 00:11:51.834 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.834 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 [2024-12-13 08:23:04.608850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.402 08:23:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.402 "name": "Existed_Raid", 00:11:52.402 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:52.402 "strip_size_kb": 64, 00:11:52.402 "state": "configuring", 00:11:52.402 "raid_level": "concat", 00:11:52.402 "superblock": true, 00:11:52.402 "num_base_bdevs": 4, 00:11:52.402 "num_base_bdevs_discovered": 3, 00:11:52.402 "num_base_bdevs_operational": 4, 00:11:52.402 "base_bdevs_list": [ 00:11:52.402 { 00:11:52.402 "name": null, 00:11:52.402 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:52.402 "is_configured": false, 00:11:52.402 "data_offset": 0, 00:11:52.402 "data_size": 63488 00:11:52.402 }, 00:11:52.402 { 00:11:52.402 "name": "BaseBdev2", 00:11:52.402 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:52.402 "is_configured": true, 00:11:52.402 "data_offset": 2048, 00:11:52.402 "data_size": 63488 00:11:52.402 }, 00:11:52.402 { 00:11:52.402 "name": "BaseBdev3", 00:11:52.402 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:52.402 "is_configured": true, 00:11:52.402 "data_offset": 2048, 00:11:52.402 "data_size": 63488 00:11:52.402 }, 00:11:52.402 { 00:11:52.402 "name": "BaseBdev4", 00:11:52.402 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:52.402 "is_configured": true, 00:11:52.402 "data_offset": 2048, 00:11:52.402 "data_size": 63488 00:11:52.402 } 00:11:52.402 ] 00:11:52.402 }' 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.402 08:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.975 08:23:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 [2024-12-13 08:23:05.181535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.975 [2024-12-13 08:23:05.181905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.975 [2024-12-13 08:23:05.181924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:52.975 [2024-12-13 08:23:05.182270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:52.975 NewBaseBdev 00:11:52.975 [2024-12-13 08:23:05.182439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.975 [2024-12-13 08:23:05.182457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:52.975 [2024-12-13 08:23:05.182597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.975 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 08:23:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 [ 00:11:52.975 { 00:11:52.975 "name": "NewBaseBdev", 00:11:52.975 "aliases": [ 00:11:52.975 "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17" 00:11:52.975 ], 00:11:52.975 "product_name": "Malloc disk", 00:11:52.975 "block_size": 512, 00:11:52.975 "num_blocks": 65536, 00:11:52.975 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:52.975 "assigned_rate_limits": { 00:11:52.975 "rw_ios_per_sec": 0, 00:11:52.975 "rw_mbytes_per_sec": 0, 00:11:52.975 "r_mbytes_per_sec": 0, 00:11:52.975 "w_mbytes_per_sec": 0 00:11:52.975 }, 00:11:52.975 "claimed": true, 00:11:52.975 "claim_type": "exclusive_write", 00:11:52.975 "zoned": false, 00:11:52.975 "supported_io_types": { 00:11:52.975 "read": true, 00:11:52.975 "write": true, 00:11:52.975 "unmap": true, 00:11:52.975 "flush": true, 00:11:52.975 "reset": true, 00:11:52.975 "nvme_admin": false, 00:11:52.975 "nvme_io": false, 00:11:52.975 "nvme_io_md": false, 00:11:52.975 "write_zeroes": true, 00:11:52.975 "zcopy": true, 00:11:52.975 "get_zone_info": false, 00:11:52.975 "zone_management": false, 00:11:52.975 "zone_append": false, 00:11:52.975 "compare": false, 00:11:52.975 "compare_and_write": false, 00:11:52.975 "abort": true, 00:11:52.975 "seek_hole": false, 00:11:52.976 "seek_data": false, 00:11:52.976 "copy": true, 00:11:52.976 "nvme_iov_md": false 00:11:52.976 }, 00:11:52.976 "memory_domains": [ 00:11:52.976 { 00:11:52.976 "dma_device_id": "system", 00:11:52.976 "dma_device_type": 1 00:11:52.976 }, 00:11:52.976 { 00:11:52.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.976 "dma_device_type": 2 00:11:52.976 } 00:11:52.976 ], 00:11:52.976 "driver_specific": {} 00:11:52.976 } 00:11:52.976 ] 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.976 08:23:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.976 "name": "Existed_Raid", 00:11:52.976 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:52.976 "strip_size_kb": 64, 00:11:52.976 
"state": "online", 00:11:52.976 "raid_level": "concat", 00:11:52.976 "superblock": true, 00:11:52.976 "num_base_bdevs": 4, 00:11:52.976 "num_base_bdevs_discovered": 4, 00:11:52.976 "num_base_bdevs_operational": 4, 00:11:52.976 "base_bdevs_list": [ 00:11:52.976 { 00:11:52.976 "name": "NewBaseBdev", 00:11:52.976 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:52.976 "is_configured": true, 00:11:52.976 "data_offset": 2048, 00:11:52.976 "data_size": 63488 00:11:52.976 }, 00:11:52.976 { 00:11:52.976 "name": "BaseBdev2", 00:11:52.976 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:52.976 "is_configured": true, 00:11:52.976 "data_offset": 2048, 00:11:52.976 "data_size": 63488 00:11:52.976 }, 00:11:52.976 { 00:11:52.976 "name": "BaseBdev3", 00:11:52.976 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:52.976 "is_configured": true, 00:11:52.976 "data_offset": 2048, 00:11:52.976 "data_size": 63488 00:11:52.976 }, 00:11:52.976 { 00:11:52.976 "name": "BaseBdev4", 00:11:52.976 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:52.976 "is_configured": true, 00:11:52.976 "data_offset": 2048, 00:11:52.976 "data_size": 63488 00:11:52.976 } 00:11:52.976 ] 00:11:52.976 }' 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.976 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.271 
08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.271 [2024-12-13 08:23:05.617348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.271 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.531 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.531 "name": "Existed_Raid", 00:11:53.531 "aliases": [ 00:11:53.531 "38c9595b-955f-4dd4-a5ba-3e8552135c4d" 00:11:53.531 ], 00:11:53.531 "product_name": "Raid Volume", 00:11:53.531 "block_size": 512, 00:11:53.531 "num_blocks": 253952, 00:11:53.531 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:53.531 "assigned_rate_limits": { 00:11:53.531 "rw_ios_per_sec": 0, 00:11:53.531 "rw_mbytes_per_sec": 0, 00:11:53.531 "r_mbytes_per_sec": 0, 00:11:53.531 "w_mbytes_per_sec": 0 00:11:53.531 }, 00:11:53.531 "claimed": false, 00:11:53.531 "zoned": false, 00:11:53.531 "supported_io_types": { 00:11:53.531 "read": true, 00:11:53.531 "write": true, 00:11:53.531 "unmap": true, 00:11:53.531 "flush": true, 00:11:53.531 "reset": true, 00:11:53.531 "nvme_admin": false, 00:11:53.531 "nvme_io": false, 00:11:53.531 "nvme_io_md": false, 00:11:53.532 "write_zeroes": true, 00:11:53.532 "zcopy": false, 00:11:53.532 "get_zone_info": false, 00:11:53.532 "zone_management": false, 00:11:53.532 "zone_append": false, 00:11:53.532 "compare": false, 00:11:53.532 "compare_and_write": false, 00:11:53.532 "abort": 
false, 00:11:53.532 "seek_hole": false, 00:11:53.532 "seek_data": false, 00:11:53.532 "copy": false, 00:11:53.532 "nvme_iov_md": false 00:11:53.532 }, 00:11:53.532 "memory_domains": [ 00:11:53.532 { 00:11:53.532 "dma_device_id": "system", 00:11:53.532 "dma_device_type": 1 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.532 "dma_device_type": 2 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "system", 00:11:53.532 "dma_device_type": 1 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.532 "dma_device_type": 2 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "system", 00:11:53.532 "dma_device_type": 1 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.532 "dma_device_type": 2 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "system", 00:11:53.532 "dma_device_type": 1 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.532 "dma_device_type": 2 00:11:53.532 } 00:11:53.532 ], 00:11:53.532 "driver_specific": { 00:11:53.532 "raid": { 00:11:53.532 "uuid": "38c9595b-955f-4dd4-a5ba-3e8552135c4d", 00:11:53.532 "strip_size_kb": 64, 00:11:53.532 "state": "online", 00:11:53.532 "raid_level": "concat", 00:11:53.532 "superblock": true, 00:11:53.532 "num_base_bdevs": 4, 00:11:53.532 "num_base_bdevs_discovered": 4, 00:11:53.532 "num_base_bdevs_operational": 4, 00:11:53.532 "base_bdevs_list": [ 00:11:53.532 { 00:11:53.532 "name": "NewBaseBdev", 00:11:53.532 "uuid": "2fecc87b-0cd4-42ff-bc9d-6b1c3de09f17", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "name": "BaseBdev2", 00:11:53.532 "uuid": "67848015-ba53-428f-9e6f-11b3d9cdd0cb", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 
"name": "BaseBdev3", 00:11:53.532 "uuid": "121a2e39-c3a7-4133-b493-73f88206e091", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 }, 00:11:53.532 { 00:11:53.532 "name": "BaseBdev4", 00:11:53.532 "uuid": "ae9b6f8c-da49-4333-b042-0ba4cb20b333", 00:11:53.532 "is_configured": true, 00:11:53.532 "data_offset": 2048, 00:11:53.532 "data_size": 63488 00:11:53.532 } 00:11:53.532 ] 00:11:53.532 } 00:11:53.532 } 00:11:53.532 }' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:53.532 BaseBdev2 00:11:53.532 BaseBdev3 00:11:53.532 BaseBdev4' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.532 08:23:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 [2024-12-13 08:23:05.932377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.792 [2024-12-13 08:23:05.932413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.792 [2024-12-13 08:23:05.932508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.792 [2024-12-13 08:23:05.932590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.792 [2024-12-13 08:23:05.932601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72137 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72137 ']' 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72137 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72137 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.792 killing process with pid 72137 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72137' 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72137 00:11:53.792 [2024-12-13 08:23:05.977840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.792 08:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72137 00:11:54.361 [2024-12-13 08:23:06.415798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.740 08:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.740 00:11:55.740 real 0m11.961s 00:11:55.740 user 0m18.848s 00:11:55.740 sys 0m2.071s 00:11:55.740 08:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.740 08:23:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.740 ************************************ 00:11:55.740 END TEST raid_state_function_test_sb 00:11:55.740 ************************************ 00:11:55.740 08:23:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:55.740 08:23:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.740 08:23:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.740 08:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.740 ************************************ 00:11:55.740 START TEST raid_superblock_test 00:11:55.740 ************************************ 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72807 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72807 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72807 ']' 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.740 08:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.740 [2024-12-13 08:23:07.838810] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
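The log around this point exercises two SPDK `autotest_common.sh` helpers: `waitforlisten`, which blocks until the freshly launched `bdev_svc` app accepts RPCs on `/var/tmp/spdk.sock`, and `killprocess` (seen a few entries earlier tearing down pid 72137). The sketch below is a hedged simplification, not the real helpers: the actual `waitforlisten` takes the app's PID and also verifies the process and RPC server are alive, and the actual `killprocess` has more retry and sudo-handling logic than shown here.

```shell
#!/usr/bin/env bash
# Hedged sketch of waitforlisten/killprocess-style helpers as seen in this log.
# Paths, retry counts, and the liveness check are simplifications.

# Poll until a socket/file appears at $rpc_addr, or give up after max_retries.
waitforlisten() {
  local rpc_addr=${1:-/var/tmp/spdk.sock}
  local max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    [ -e "$rpc_addr" ] && return 0
    sleep 0.1
  done
  echo "timed out waiting for $rpc_addr" >&2
  return 1
}

# Check the PID is alive (kill -0), refuse to kill a sudo wrapper (mirroring
# the "ps --no-headers -o comm=" check in the log), then SIGTERM and reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1
  [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # exit status 143 (SIGTERM) is expected
}
```

In the log, the equivalent of `waitforlisten 72807` gates the test until the `bdev_svc` reactor has started on core 0, after which the `bdev_malloc_create`/`bdev_passthru_create` RPCs begin.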
00:11:55.740 [2024-12-13 08:23:07.838963] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72807 ] 00:11:55.740 [2024-12-13 08:23:08.008600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.999 [2024-12-13 08:23:08.133423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.999 [2024-12-13 08:23:08.351066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.999 [2024-12-13 08:23:08.351118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:56.568 
08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.568 malloc1 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.568 [2024-12-13 08:23:08.816187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.568 [2024-12-13 08:23:08.816255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.568 [2024-12-13 08:23:08.816280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.568 [2024-12-13 08:23:08.816290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.568 [2024-12-13 08:23:08.818770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.568 [2024-12-13 08:23:08.818814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.568 pt1 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.568 malloc2 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.568 [2024-12-13 08:23:08.879368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.568 [2024-12-13 08:23:08.879482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.568 [2024-12-13 08:23:08.879537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:56.568 [2024-12-13 08:23:08.879573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.568 [2024-12-13 08:23:08.882049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.568 [2024-12-13 08:23:08.882148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.568 
pt2 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.568 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.827 malloc3 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.827 [2024-12-13 08:23:08.959570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:56.827 [2024-12-13 08:23:08.959675] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.827 [2024-12-13 08:23:08.959724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:56.827 [2024-12-13 08:23:08.959767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.827 [2024-12-13 08:23:08.962260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.827 [2024-12-13 08:23:08.962334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:56.827 pt3 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.827 08:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.827 malloc4 00:11:56.827 08:23:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.827 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.828 [2024-12-13 08:23:09.018736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:56.828 [2024-12-13 08:23:09.018852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.828 [2024-12-13 08:23:09.018880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:56.828 [2024-12-13 08:23:09.018891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.828 [2024-12-13 08:23:09.021324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.828 [2024-12-13 08:23:09.021364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:56.828 pt4 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.828 [2024-12-13 08:23:09.030743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.828 [2024-12-13 
08:23:09.032884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.828 [2024-12-13 08:23:09.033026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:56.828 [2024-12-13 08:23:09.033137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:56.828 [2024-12-13 08:23:09.033391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:56.828 [2024-12-13 08:23:09.033442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.828 [2024-12-13 08:23:09.033752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.828 [2024-12-13 08:23:09.033979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:56.828 [2024-12-13 08:23:09.034032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:56.828 [2024-12-13 08:23:09.034273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.828 "name": "raid_bdev1", 00:11:56.828 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:56.828 "strip_size_kb": 64, 00:11:56.828 "state": "online", 00:11:56.828 "raid_level": "concat", 00:11:56.828 "superblock": true, 00:11:56.828 "num_base_bdevs": 4, 00:11:56.828 "num_base_bdevs_discovered": 4, 00:11:56.828 "num_base_bdevs_operational": 4, 00:11:56.828 "base_bdevs_list": [ 00:11:56.828 { 00:11:56.828 "name": "pt1", 00:11:56.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.828 "is_configured": true, 00:11:56.828 "data_offset": 2048, 00:11:56.828 "data_size": 63488 00:11:56.828 }, 00:11:56.828 { 00:11:56.828 "name": "pt2", 00:11:56.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.828 "is_configured": true, 00:11:56.828 "data_offset": 2048, 00:11:56.828 "data_size": 63488 00:11:56.828 }, 00:11:56.828 { 00:11:56.828 "name": "pt3", 00:11:56.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.828 "is_configured": true, 00:11:56.828 "data_offset": 2048, 00:11:56.828 
"data_size": 63488 00:11:56.828 }, 00:11:56.828 { 00:11:56.828 "name": "pt4", 00:11:56.828 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.828 "is_configured": true, 00:11:56.828 "data_offset": 2048, 00:11:56.828 "data_size": 63488 00:11:56.828 } 00:11:56.828 ] 00:11:56.828 }' 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.828 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.396 [2024-12-13 08:23:09.486409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.396 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.396 "name": "raid_bdev1", 00:11:57.396 "aliases": [ 00:11:57.396 "6a39435e-df2a-49fe-8eb3-c047042eb598" 
00:11:57.396 ], 00:11:57.396 "product_name": "Raid Volume", 00:11:57.396 "block_size": 512, 00:11:57.396 "num_blocks": 253952, 00:11:57.396 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:57.396 "assigned_rate_limits": { 00:11:57.396 "rw_ios_per_sec": 0, 00:11:57.396 "rw_mbytes_per_sec": 0, 00:11:57.396 "r_mbytes_per_sec": 0, 00:11:57.396 "w_mbytes_per_sec": 0 00:11:57.396 }, 00:11:57.396 "claimed": false, 00:11:57.396 "zoned": false, 00:11:57.396 "supported_io_types": { 00:11:57.396 "read": true, 00:11:57.397 "write": true, 00:11:57.397 "unmap": true, 00:11:57.397 "flush": true, 00:11:57.397 "reset": true, 00:11:57.397 "nvme_admin": false, 00:11:57.397 "nvme_io": false, 00:11:57.397 "nvme_io_md": false, 00:11:57.397 "write_zeroes": true, 00:11:57.397 "zcopy": false, 00:11:57.397 "get_zone_info": false, 00:11:57.397 "zone_management": false, 00:11:57.397 "zone_append": false, 00:11:57.397 "compare": false, 00:11:57.397 "compare_and_write": false, 00:11:57.397 "abort": false, 00:11:57.397 "seek_hole": false, 00:11:57.397 "seek_data": false, 00:11:57.397 "copy": false, 00:11:57.397 "nvme_iov_md": false 00:11:57.397 }, 00:11:57.397 "memory_domains": [ 00:11:57.397 { 00:11:57.397 "dma_device_id": "system", 00:11:57.397 "dma_device_type": 1 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.397 "dma_device_type": 2 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "system", 00:11:57.397 "dma_device_type": 1 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.397 "dma_device_type": 2 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "system", 00:11:57.397 "dma_device_type": 1 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.397 "dma_device_type": 2 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": "system", 00:11:57.397 "dma_device_type": 1 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:57.397 "dma_device_type": 2 00:11:57.397 } 00:11:57.397 ], 00:11:57.397 "driver_specific": { 00:11:57.397 "raid": { 00:11:57.397 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:57.397 "strip_size_kb": 64, 00:11:57.397 "state": "online", 00:11:57.397 "raid_level": "concat", 00:11:57.397 "superblock": true, 00:11:57.397 "num_base_bdevs": 4, 00:11:57.397 "num_base_bdevs_discovered": 4, 00:11:57.397 "num_base_bdevs_operational": 4, 00:11:57.397 "base_bdevs_list": [ 00:11:57.397 { 00:11:57.397 "name": "pt1", 00:11:57.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.397 "is_configured": true, 00:11:57.397 "data_offset": 2048, 00:11:57.397 "data_size": 63488 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "name": "pt2", 00:11:57.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.397 "is_configured": true, 00:11:57.397 "data_offset": 2048, 00:11:57.397 "data_size": 63488 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "name": "pt3", 00:11:57.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.397 "is_configured": true, 00:11:57.397 "data_offset": 2048, 00:11:57.397 "data_size": 63488 00:11:57.397 }, 00:11:57.397 { 00:11:57.397 "name": "pt4", 00:11:57.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.397 "is_configured": true, 00:11:57.397 "data_offset": 2048, 00:11:57.397 "data_size": 63488 00:11:57.397 } 00:11:57.397 ] 00:11:57.397 } 00:11:57.397 } 00:11:57.397 }' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:57.397 pt2 00:11:57.397 pt3 00:11:57.397 pt4' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.397 08:23:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.397 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.657 [2024-12-13 08:23:09.817787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6a39435e-df2a-49fe-8eb3-c047042eb598 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6a39435e-df2a-49fe-8eb3-c047042eb598 ']' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.657 [2024-12-13 08:23:09.853377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.657 [2024-12-13 08:23:09.853451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.657 [2024-12-13 08:23:09.853582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.657 [2024-12-13 08:23:09.853704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.657 [2024-12-13 08:23:09.853760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.657 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.658 08:23:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:57.658 08:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 [2024-12-13 08:23:10.009180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:57.658 [2024-12-13 08:23:10.011418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:57.658 [2024-12-13 08:23:10.011522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:57.658 [2024-12-13 08:23:10.011597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:57.658 [2024-12-13 08:23:10.011694] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:57.658 [2024-12-13 08:23:10.011796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:57.658 [2024-12-13 08:23:10.011871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:57.658 [2024-12-13 08:23:10.011944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:57.658 [2024-12-13 08:23:10.012007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.658 [2024-12-13 08:23:10.012044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:57.658 request: 00:11:57.658 { 00:11:57.658 "name": "raid_bdev1", 00:11:57.658 "raid_level": "concat", 00:11:57.658 "base_bdevs": [ 00:11:57.658 "malloc1", 00:11:57.658 "malloc2", 00:11:57.658 "malloc3", 00:11:57.658 "malloc4" 00:11:57.658 ], 00:11:57.658 "strip_size_kb": 64, 00:11:57.658 "superblock": false, 00:11:57.658 "method": "bdev_raid_create", 00:11:57.658 "req_id": 1 00:11:57.658 } 00:11:57.658 Got JSON-RPC error response 00:11:57.658 response: 00:11:57.658 { 00:11:57.658 "code": -17, 00:11:57.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:57.658 } 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 [2024-12-13 08:23:10.069022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.918 [2024-12-13 08:23:10.069094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.918 [2024-12-13 08:23:10.069125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.918 [2024-12-13 08:23:10.069137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.918 [2024-12-13 08:23:10.071645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.918 [2024-12-13 08:23:10.071694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.918 [2024-12-13 08:23:10.071794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:57.918 [2024-12-13 08:23:10.071856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.918 pt1 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.918 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.919 "name": "raid_bdev1", 00:11:57.919 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:57.919 "strip_size_kb": 64, 00:11:57.919 "state": "configuring", 00:11:57.919 "raid_level": "concat", 00:11:57.919 "superblock": true, 00:11:57.919 "num_base_bdevs": 4, 00:11:57.919 "num_base_bdevs_discovered": 1, 00:11:57.919 "num_base_bdevs_operational": 4, 00:11:57.919 "base_bdevs_list": [ 00:11:57.919 { 00:11:57.919 "name": "pt1", 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.919 "is_configured": true, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 } 00:11:57.919 ] 00:11:57.919 }' 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.919 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.179 [2024-12-13 08:23:10.536267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.179 [2024-12-13 08:23:10.536414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.179 [2024-12-13 08:23:10.536456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:58.179 [2024-12-13 08:23:10.536490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.179 [2024-12-13 08:23:10.536998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.179 [2024-12-13 08:23:10.537067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.179 [2024-12-13 08:23:10.537209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.179 [2024-12-13 08:23:10.537270] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.179 pt2 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.179 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.439 [2024-12-13 08:23:10.548239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.439 08:23:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.439 "name": "raid_bdev1", 00:11:58.439 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:58.439 "strip_size_kb": 64, 00:11:58.439 "state": "configuring", 00:11:58.439 "raid_level": "concat", 00:11:58.439 "superblock": true, 00:11:58.439 "num_base_bdevs": 4, 00:11:58.439 "num_base_bdevs_discovered": 1, 00:11:58.439 "num_base_bdevs_operational": 4, 00:11:58.439 "base_bdevs_list": [ 00:11:58.439 { 00:11:58.439 "name": "pt1", 00:11:58.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.439 "is_configured": true, 00:11:58.439 "data_offset": 2048, 00:11:58.439 "data_size": 63488 00:11:58.439 }, 00:11:58.439 { 00:11:58.439 "name": null, 00:11:58.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.439 "is_configured": false, 00:11:58.439 "data_offset": 0, 00:11:58.439 "data_size": 63488 00:11:58.439 }, 00:11:58.439 { 00:11:58.439 "name": null, 00:11:58.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.439 "is_configured": false, 00:11:58.439 "data_offset": 2048, 00:11:58.439 "data_size": 63488 00:11:58.439 }, 00:11:58.439 { 00:11:58.439 "name": null, 00:11:58.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.439 "is_configured": false, 00:11:58.439 "data_offset": 2048, 00:11:58.439 "data_size": 63488 00:11:58.439 } 00:11:58.439 ] 00:11:58.439 }' 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.439 08:23:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.699 [2024-12-13 08:23:11.027435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.699 [2024-12-13 08:23:11.027555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.699 [2024-12-13 08:23:11.027607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:58.699 [2024-12-13 08:23:11.027648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.699 [2024-12-13 08:23:11.028209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.699 [2024-12-13 08:23:11.028273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.699 [2024-12-13 08:23:11.028404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.699 [2024-12-13 08:23:11.028461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.699 pt2 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.699 [2024-12-13 08:23:11.039384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:58.699 [2024-12-13 08:23:11.039480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.699 [2024-12-13 08:23:11.039521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:58.699 [2024-12-13 08:23:11.039552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.699 [2024-12-13 08:23:11.040018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.699 [2024-12-13 08:23:11.040085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:58.699 [2024-12-13 08:23:11.040218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:58.699 [2024-12-13 08:23:11.040284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.699 pt3 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.699 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.699 [2024-12-13 08:23:11.051344] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:58.699 [2024-12-13 08:23:11.051430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.699 [2024-12-13 08:23:11.051469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:58.700 [2024-12-13 08:23:11.051501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.700 [2024-12-13 08:23:11.051982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.700 [2024-12-13 08:23:11.052048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:58.700 [2024-12-13 08:23:11.052164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:58.700 [2024-12-13 08:23:11.052223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:58.700 [2024-12-13 08:23:11.052405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.700 [2024-12-13 08:23:11.052448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:58.700 [2024-12-13 08:23:11.052736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:58.700 [2024-12-13 08:23:11.052941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.700 [2024-12-13 08:23:11.052993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:58.700 [2024-12-13 08:23:11.053188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.700 pt4 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.700 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.960 "name": "raid_bdev1", 00:11:58.960 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:58.960 "strip_size_kb": 64, 00:11:58.960 "state": "online", 00:11:58.960 "raid_level": "concat", 00:11:58.960 
"superblock": true, 00:11:58.960 "num_base_bdevs": 4, 00:11:58.960 "num_base_bdevs_discovered": 4, 00:11:58.960 "num_base_bdevs_operational": 4, 00:11:58.960 "base_bdevs_list": [ 00:11:58.960 { 00:11:58.960 "name": "pt1", 00:11:58.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.960 "is_configured": true, 00:11:58.960 "data_offset": 2048, 00:11:58.960 "data_size": 63488 00:11:58.960 }, 00:11:58.960 { 00:11:58.960 "name": "pt2", 00:11:58.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.960 "is_configured": true, 00:11:58.960 "data_offset": 2048, 00:11:58.960 "data_size": 63488 00:11:58.960 }, 00:11:58.960 { 00:11:58.960 "name": "pt3", 00:11:58.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.960 "is_configured": true, 00:11:58.960 "data_offset": 2048, 00:11:58.960 "data_size": 63488 00:11:58.960 }, 00:11:58.960 { 00:11:58.960 "name": "pt4", 00:11:58.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.960 "is_configured": true, 00:11:58.960 "data_offset": 2048, 00:11:58.960 "data_size": 63488 00:11:58.960 } 00:11:58.960 ] 00:11:58.960 }' 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.960 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.265 08:23:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.265 [2024-12-13 08:23:11.499070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.265 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.265 "name": "raid_bdev1", 00:11:59.265 "aliases": [ 00:11:59.265 "6a39435e-df2a-49fe-8eb3-c047042eb598" 00:11:59.265 ], 00:11:59.265 "product_name": "Raid Volume", 00:11:59.265 "block_size": 512, 00:11:59.265 "num_blocks": 253952, 00:11:59.265 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:59.265 "assigned_rate_limits": { 00:11:59.265 "rw_ios_per_sec": 0, 00:11:59.266 "rw_mbytes_per_sec": 0, 00:11:59.266 "r_mbytes_per_sec": 0, 00:11:59.266 "w_mbytes_per_sec": 0 00:11:59.266 }, 00:11:59.266 "claimed": false, 00:11:59.266 "zoned": false, 00:11:59.266 "supported_io_types": { 00:11:59.266 "read": true, 00:11:59.266 "write": true, 00:11:59.266 "unmap": true, 00:11:59.266 "flush": true, 00:11:59.266 "reset": true, 00:11:59.266 "nvme_admin": false, 00:11:59.266 "nvme_io": false, 00:11:59.266 "nvme_io_md": false, 00:11:59.266 "write_zeroes": true, 00:11:59.266 "zcopy": false, 00:11:59.266 "get_zone_info": false, 00:11:59.266 "zone_management": false, 00:11:59.266 "zone_append": false, 00:11:59.266 "compare": false, 00:11:59.266 "compare_and_write": false, 00:11:59.266 "abort": false, 00:11:59.266 "seek_hole": false, 00:11:59.266 "seek_data": false, 00:11:59.266 "copy": false, 00:11:59.266 "nvme_iov_md": false 00:11:59.266 }, 00:11:59.266 
"memory_domains": [ 00:11:59.266 { 00:11:59.266 "dma_device_id": "system", 00:11:59.266 "dma_device_type": 1 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.266 "dma_device_type": 2 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "system", 00:11:59.266 "dma_device_type": 1 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.266 "dma_device_type": 2 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "system", 00:11:59.266 "dma_device_type": 1 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.266 "dma_device_type": 2 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "system", 00:11:59.266 "dma_device_type": 1 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.266 "dma_device_type": 2 00:11:59.266 } 00:11:59.266 ], 00:11:59.266 "driver_specific": { 00:11:59.266 "raid": { 00:11:59.266 "uuid": "6a39435e-df2a-49fe-8eb3-c047042eb598", 00:11:59.266 "strip_size_kb": 64, 00:11:59.266 "state": "online", 00:11:59.266 "raid_level": "concat", 00:11:59.266 "superblock": true, 00:11:59.266 "num_base_bdevs": 4, 00:11:59.266 "num_base_bdevs_discovered": 4, 00:11:59.266 "num_base_bdevs_operational": 4, 00:11:59.266 "base_bdevs_list": [ 00:11:59.266 { 00:11:59.266 "name": "pt1", 00:11:59.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.266 "is_configured": true, 00:11:59.266 "data_offset": 2048, 00:11:59.266 "data_size": 63488 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "name": "pt2", 00:11:59.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.266 "is_configured": true, 00:11:59.266 "data_offset": 2048, 00:11:59.266 "data_size": 63488 00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "name": "pt3", 00:11:59.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.266 "is_configured": true, 00:11:59.266 "data_offset": 2048, 00:11:59.266 "data_size": 63488 
00:11:59.266 }, 00:11:59.266 { 00:11:59.266 "name": "pt4", 00:11:59.266 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.266 "is_configured": true, 00:11:59.266 "data_offset": 2048, 00:11:59.266 "data_size": 63488 00:11:59.266 } 00:11:59.266 ] 00:11:59.266 } 00:11:59.266 } 00:11:59.266 }' 00:11:59.266 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.266 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:59.266 pt2 00:11:59.266 pt3 00:11:59.266 pt4' 00:11:59.266 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.526 [2024-12-13 08:23:11.854405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.526 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6a39435e-df2a-49fe-8eb3-c047042eb598 '!=' 6a39435e-df2a-49fe-8eb3-c047042eb598 ']' 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72807 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72807 ']' 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72807 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72807 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72807' 00:11:59.786 killing process with pid 72807 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72807 00:11:59.786 [2024-12-13 08:23:11.947424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.786 [2024-12-13 08:23:11.947525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.786 08:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72807 00:11:59.786 [2024-12-13 08:23:11.947613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.786 [2024-12-13 08:23:11.947625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:00.045 [2024-12-13 08:23:12.389987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.422 08:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:01.422 00:12:01.422 real 0m5.885s 00:12:01.422 user 0m8.403s 00:12:01.422 sys 0m1.010s 00:12:01.422 08:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.422 ************************************ 00:12:01.422 END TEST raid_superblock_test 00:12:01.422 ************************************ 00:12:01.422 08:23:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 08:23:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:01.422 08:23:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.422 08:23:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.422 08:23:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 ************************************ 00:12:01.422 START TEST raid_read_error_test 00:12:01.422 ************************************ 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BF2XNCVVzA 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73072 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73072 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73072 ']' 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.422 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.423 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.423 08:23:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.697 [2024-12-13 08:23:13.807012] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:01.697 [2024-12-13 08:23:13.807263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73072 ] 00:12:01.697 [2024-12-13 08:23:13.999862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.956 [2024-12-13 08:23:14.130116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.215 [2024-12-13 08:23:14.344126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.215 [2024-12-13 08:23:14.344161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 BaseBdev1_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 true 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 [2024-12-13 08:23:14.739432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:02.475 [2024-12-13 08:23:14.739490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.475 [2024-12-13 08:23:14.739529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:02.475 [2024-12-13 08:23:14.739541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.475 [2024-12-13 08:23:14.741923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.475 [2024-12-13 08:23:14.741970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:02.475 BaseBdev1 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 BaseBdev2_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 true 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 [2024-12-13 08:23:14.807349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:02.475 [2024-12-13 08:23:14.807409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.475 [2024-12-13 08:23:14.807428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:02.475 [2024-12-13 08:23:14.807439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.475 [2024-12-13 08:23:14.809786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.475 [2024-12-13 08:23:14.809871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:02.475 BaseBdev2 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.475 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 BaseBdev3_malloc 00:12:02.736 08:23:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 true 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 [2024-12-13 08:23:14.890956] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:02.736 [2024-12-13 08:23:14.891021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.736 [2024-12-13 08:23:14.891041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:02.736 [2024-12-13 08:23:14.891053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.736 [2024-12-13 08:23:14.893358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.736 [2024-12-13 08:23:14.893411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:02.736 BaseBdev3 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 BaseBdev4_malloc 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 true 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 [2024-12-13 08:23:14.959270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:02.736 [2024-12-13 08:23:14.959330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.736 [2024-12-13 08:23:14.959351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:02.736 [2024-12-13 08:23:14.959363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.736 [2024-12-13 08:23:14.961683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.736 [2024-12-13 08:23:14.961794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:02.736 BaseBdev4 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.736 [2024-12-13 08:23:14.971335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.736 [2024-12-13 08:23:14.973437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.736 [2024-12-13 08:23:14.973518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.736 [2024-12-13 08:23:14.973586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:02.736 [2024-12-13 08:23:14.973825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:02.736 [2024-12-13 08:23:14.973842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:02.736 [2024-12-13 08:23:14.974126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:02.736 [2024-12-13 08:23:14.974309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:02.736 [2024-12-13 08:23:14.974322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:02.736 [2024-12-13 08:23:14.974510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.736 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:02.737 08:23:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.737 08:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.737 08:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.737 08:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.737 "name": "raid_bdev1", 00:12:02.737 "uuid": "afb78a7e-1902-48c5-9274-467dbc0c2c57", 00:12:02.737 "strip_size_kb": 64, 00:12:02.737 "state": "online", 00:12:02.737 "raid_level": "concat", 00:12:02.737 "superblock": true, 00:12:02.737 "num_base_bdevs": 4, 00:12:02.737 "num_base_bdevs_discovered": 4, 00:12:02.737 "num_base_bdevs_operational": 4, 00:12:02.737 "base_bdevs_list": [ 
00:12:02.737 { 00:12:02.737 "name": "BaseBdev1", 00:12:02.737 "uuid": "86d7d3d3-233c-5ea2-b531-ffdc4c9ea368", 00:12:02.737 "is_configured": true, 00:12:02.737 "data_offset": 2048, 00:12:02.737 "data_size": 63488 00:12:02.737 }, 00:12:02.737 { 00:12:02.737 "name": "BaseBdev2", 00:12:02.737 "uuid": "e99d66e8-a698-5b4b-9a1e-0bd373d2e36a", 00:12:02.737 "is_configured": true, 00:12:02.737 "data_offset": 2048, 00:12:02.737 "data_size": 63488 00:12:02.737 }, 00:12:02.737 { 00:12:02.737 "name": "BaseBdev3", 00:12:02.737 "uuid": "1914bba1-3e18-53f5-91a4-e7bdcfaf89d8", 00:12:02.737 "is_configured": true, 00:12:02.737 "data_offset": 2048, 00:12:02.737 "data_size": 63488 00:12:02.737 }, 00:12:02.737 { 00:12:02.737 "name": "BaseBdev4", 00:12:02.737 "uuid": "6a3d3665-8f96-5518-ba95-32538d1ed532", 00:12:02.737 "is_configured": true, 00:12:02.737 "data_offset": 2048, 00:12:02.737 "data_size": 63488 00:12:02.737 } 00:12:02.737 ] 00:12:02.737 }' 00:12:02.737 08:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.737 08:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 08:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:03.303 08:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:03.303 [2024-12-13 08:23:15.551583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.242 08:23:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.242 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.242 08:23:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.242 "name": "raid_bdev1", 00:12:04.242 "uuid": "afb78a7e-1902-48c5-9274-467dbc0c2c57", 00:12:04.242 "strip_size_kb": 64, 00:12:04.242 "state": "online", 00:12:04.242 "raid_level": "concat", 00:12:04.242 "superblock": true, 00:12:04.242 "num_base_bdevs": 4, 00:12:04.242 "num_base_bdevs_discovered": 4, 00:12:04.242 "num_base_bdevs_operational": 4, 00:12:04.243 "base_bdevs_list": [ 00:12:04.243 { 00:12:04.243 "name": "BaseBdev1", 00:12:04.243 "uuid": "86d7d3d3-233c-5ea2-b531-ffdc4c9ea368", 00:12:04.243 "is_configured": true, 00:12:04.243 "data_offset": 2048, 00:12:04.243 "data_size": 63488 00:12:04.243 }, 00:12:04.243 { 00:12:04.243 "name": "BaseBdev2", 00:12:04.243 "uuid": "e99d66e8-a698-5b4b-9a1e-0bd373d2e36a", 00:12:04.243 "is_configured": true, 00:12:04.243 "data_offset": 2048, 00:12:04.243 "data_size": 63488 00:12:04.243 }, 00:12:04.243 { 00:12:04.243 "name": "BaseBdev3", 00:12:04.243 "uuid": "1914bba1-3e18-53f5-91a4-e7bdcfaf89d8", 00:12:04.243 "is_configured": true, 00:12:04.243 "data_offset": 2048, 00:12:04.243 "data_size": 63488 00:12:04.243 }, 00:12:04.243 { 00:12:04.243 "name": "BaseBdev4", 00:12:04.243 "uuid": "6a3d3665-8f96-5518-ba95-32538d1ed532", 00:12:04.243 "is_configured": true, 00:12:04.243 "data_offset": 2048, 00:12:04.243 "data_size": 63488 00:12:04.243 } 00:12:04.243 ] 00:12:04.243 }' 00:12:04.243 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.243 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.829 [2024-12-13 08:23:16.912125] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.829 [2024-12-13 08:23:16.912158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.829 [2024-12-13 08:23:16.914892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.829 [2024-12-13 08:23:16.914983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.829 [2024-12-13 08:23:16.915031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.829 [2024-12-13 08:23:16.915046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:04.829 { 00:12:04.829 "results": [ 00:12:04.829 { 00:12:04.829 "job": "raid_bdev1", 00:12:04.829 "core_mask": "0x1", 00:12:04.829 "workload": "randrw", 00:12:04.829 "percentage": 50, 00:12:04.829 "status": "finished", 00:12:04.829 "queue_depth": 1, 00:12:04.829 "io_size": 131072, 00:12:04.829 "runtime": 1.361141, 00:12:04.829 "iops": 14371.766040402868, 00:12:04.829 "mibps": 1796.4707550503585, 00:12:04.829 "io_failed": 1, 00:12:04.829 "io_timeout": 0, 00:12:04.829 "avg_latency_us": 96.3466585058194, 00:12:04.829 "min_latency_us": 28.17117903930131, 00:12:04.829 "max_latency_us": 1645.5545851528384 00:12:04.829 } 00:12:04.829 ], 00:12:04.829 "core_count": 1 00:12:04.829 } 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73072 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73072 ']' 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73072 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73072 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73072' 00:12:04.829 killing process with pid 73072 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73072 00:12:04.829 [2024-12-13 08:23:16.954561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.829 08:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73072 00:12:05.141 [2024-12-13 08:23:17.302297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BF2XNCVVzA 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:06.517 00:12:06.517 real 0m4.894s 00:12:06.517 user 0m5.775s 00:12:06.517 sys 0m0.612s 00:12:06.517 ************************************ 00:12:06.517 END TEST raid_read_error_test 
00:12:06.517 ************************************ 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.517 08:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 08:23:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:06.517 08:23:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.517 08:23:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.517 08:23:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 ************************************ 00:12:06.517 START TEST raid_write_error_test 00:12:06.517 ************************************ 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v67DMkXLCu 00:12:06.517 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73218 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73218 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73218 ']' 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.517 08:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 [2024-12-13 08:23:18.771150] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
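The bdevperf command line visible in the trace above fixes the workload for the error test. A minimal sketch of how those arguments fit together, using the exact flags from the trace (`BDEVPERF` is a placeholder for your build's binary; nothing is executed here, the command is only printed — `-z` makes bdevperf wait for the `perform_tests` RPC that the trace issues later):

```shell
#!/bin/sh
# Placeholder path; point this at build/examples/bdevperf in your SPDK tree.
BDEVPERF="${BDEVPERF:-/path/to/spdk/build/examples/bdevperf}"

# Flags as seen in the trace: target raid_bdev1, 60 s limit, 50/50 random
# read/write mix (-w randrw -M 50), 128 KiB I/Os at queue depth 1, -z to
# wait for the perform_tests RPC, and -L bdev_raid for the *DEBUG* lines
# that appear throughout this log.
args="-T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid"

echo "$BDEVPERF $args"
```

The `-f` flag is passed through unchanged from the trace; consult `bdevperf --help` for its exact meaning on your SPDK version.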
00:12:06.518 [2024-12-13 08:23:18.771281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:12:06.776 [2024-12-13 08:23:18.947267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.776 [2024-12-13 08:23:19.069925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.035 [2024-12-13 08:23:19.286851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.035 [2024-12-13 08:23:19.287003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.293 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 BaseBdev1_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 true 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 [2024-12-13 08:23:19.680161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:07.552 [2024-12-13 08:23:19.680277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.552 [2024-12-13 08:23:19.680302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:07.552 [2024-12-13 08:23:19.680314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.552 [2024-12-13 08:23:19.682421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.552 [2024-12-13 08:23:19.682464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.552 BaseBdev1 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 BaseBdev2_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:07.552 08:23:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 true 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 [2024-12-13 08:23:19.747224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:07.552 [2024-12-13 08:23:19.747286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.552 [2024-12-13 08:23:19.747306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:07.552 [2024-12-13 08:23:19.747318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.552 [2024-12-13 08:23:19.749630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.552 [2024-12-13 08:23:19.749736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:07.552 BaseBdev2 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:07.552 BaseBdev3_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.552 true 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:07.552 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 [2024-12-13 08:23:19.828383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:07.553 [2024-12-13 08:23:19.828483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.553 [2024-12-13 08:23:19.828521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:07.553 [2024-12-13 08:23:19.828553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.553 [2024-12-13 08:23:19.830697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.553 [2024-12-13 08:23:19.830776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:07.553 BaseBdev3 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 BaseBdev4_malloc 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 true 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 [2024-12-13 08:23:19.900416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:07.553 [2024-12-13 08:23:19.900541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.553 [2024-12-13 08:23:19.900591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:07.553 [2024-12-13 08:23:19.900647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.553 [2024-12-13 08:23:19.903422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.553 [2024-12-13 08:23:19.903528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:07.553 BaseBdev4 
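Each of the four base devices above is built as a three-layer stack: a malloc bdev, an error bdev on top of it (so the test can later inject failures with `bdev_error_inject_error`), and a passthru bdev that the raid consumes. A dry-run sketch of that sequence, using the same RPC names and arguments as the trace — `RPC` defaults to `echo` here so the script only prints the calls; set it to your `rpc.py` invocation to run it against a live target:

```shell
#!/bin/sh
# Dry-run by default: each command is echoed rather than sent to SPDK.
RPC="${RPC:-echo}"

build_raid_stack() {
  for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace.
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # Error bdev wrapping it; this is the fault-injection point.
    $RPC bdev_error_create "BaseBdev${i}_malloc"
    # Passthru bdev over the EE_ device; the raid claims this layer.
    $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  # Concat raid over the four passthru bdevs, 64 KiB strips, superblock on.
  $RPC bdev_raid_create -z 64 -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
}

build_raid_stack
```

With `RPC=echo` this prints thirteen commands (three per base bdev plus the raid create), mirroring the rpc_cmd sequence in the trace.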
00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 [2024-12-13 08:23:19.912472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.553 [2024-12-13 08:23:19.914564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.553 [2024-12-13 08:23:19.914704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.553 [2024-12-13 08:23:19.914843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.553 [2024-12-13 08:23:19.915205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:07.553 [2024-12-13 08:23:19.915268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:07.553 [2024-12-13 08:23:19.915608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:07.811 [2024-12-13 08:23:19.915858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:07.811 [2024-12-13 08:23:19.915878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:07.811 [2024-12-13 08:23:19.916060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.811 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:07.811 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.812 "name": "raid_bdev1", 00:12:07.812 "uuid": "2e386511-1571-4b6b-878a-dbaf02954125", 00:12:07.812 "strip_size_kb": 64, 00:12:07.812 "state": "online", 00:12:07.812 "raid_level": "concat", 00:12:07.812 "superblock": true, 00:12:07.812 "num_base_bdevs": 4, 00:12:07.812 "num_base_bdevs_discovered": 4, 00:12:07.812 
"num_base_bdevs_operational": 4, 00:12:07.812 "base_bdevs_list": [ 00:12:07.812 { 00:12:07.812 "name": "BaseBdev1", 00:12:07.812 "uuid": "c5899dce-0f60-59de-842c-d98531228ac1", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev2", 00:12:07.812 "uuid": "2264b471-1cc7-5aa8-b307-6b3879e843be", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev3", 00:12:07.812 "uuid": "9c288941-3ee3-51e0-b03d-b0868de835a2", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev4", 00:12:07.812 "uuid": "740a5f7c-9aa9-5103-a6ce-9576592123f2", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 } 00:12:07.812 ] 00:12:07.812 }' 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.812 08:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.070 08:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:08.070 08:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:08.330 [2024-12-13 08:23:20.492936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.266 08:23:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.266 "name": "raid_bdev1", 00:12:09.266 "uuid": "2e386511-1571-4b6b-878a-dbaf02954125", 00:12:09.266 "strip_size_kb": 64, 00:12:09.266 "state": "online", 00:12:09.266 "raid_level": "concat", 00:12:09.266 "superblock": true, 00:12:09.266 "num_base_bdevs": 4, 00:12:09.266 "num_base_bdevs_discovered": 4, 00:12:09.266 "num_base_bdevs_operational": 4, 00:12:09.266 "base_bdevs_list": [ 00:12:09.266 { 00:12:09.266 "name": "BaseBdev1", 00:12:09.266 "uuid": "c5899dce-0f60-59de-842c-d98531228ac1", 00:12:09.266 "is_configured": true, 00:12:09.266 "data_offset": 2048, 00:12:09.266 "data_size": 63488 00:12:09.266 }, 00:12:09.266 { 00:12:09.266 "name": "BaseBdev2", 00:12:09.266 "uuid": "2264b471-1cc7-5aa8-b307-6b3879e843be", 00:12:09.266 "is_configured": true, 00:12:09.266 "data_offset": 2048, 00:12:09.266 "data_size": 63488 00:12:09.266 }, 00:12:09.266 { 00:12:09.266 "name": "BaseBdev3", 00:12:09.266 "uuid": "9c288941-3ee3-51e0-b03d-b0868de835a2", 00:12:09.266 "is_configured": true, 00:12:09.266 "data_offset": 2048, 00:12:09.266 "data_size": 63488 00:12:09.266 }, 00:12:09.266 { 00:12:09.266 "name": "BaseBdev4", 00:12:09.266 "uuid": "740a5f7c-9aa9-5103-a6ce-9576592123f2", 00:12:09.266 "is_configured": true, 00:12:09.266 "data_offset": 2048, 00:12:09.266 "data_size": 63488 00:12:09.266 } 00:12:09.266 ] 00:12:09.266 }' 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.266 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.525 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.525 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.784 [2024-12-13 08:23:21.893312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.784 [2024-12-13 08:23:21.893347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.784 [2024-12-13 08:23:21.896091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.784 [2024-12-13 08:23:21.896161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.784 [2024-12-13 08:23:21.896205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.784 [2024-12-13 08:23:21.896217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:09.784 { 00:12:09.784 "results": [ 00:12:09.784 { 00:12:09.784 "job": "raid_bdev1", 00:12:09.784 "core_mask": "0x1", 00:12:09.784 "workload": "randrw", 00:12:09.784 "percentage": 50, 00:12:09.784 "status": "finished", 00:12:09.784 "queue_depth": 1, 00:12:09.784 "io_size": 131072, 00:12:09.784 "runtime": 1.401098, 00:12:09.784 "iops": 14349.460209064606, 00:12:09.784 "mibps": 1793.6825261330757, 00:12:09.784 "io_failed": 1, 00:12:09.784 "io_timeout": 0, 00:12:09.784 "avg_latency_us": 96.49212014749773, 00:12:09.784 "min_latency_us": 27.388646288209607, 00:12:09.784 "max_latency_us": 1445.2262008733624 00:12:09.784 } 00:12:09.784 ], 00:12:09.784 "core_count": 1 00:12:09.784 } 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73218 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73218 ']' 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73218 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73218 00:12:09.784 killing process with pid 73218 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73218' 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73218 00:12:09.784 [2024-12-13 08:23:21.942729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.784 08:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73218 00:12:10.043 [2024-12-13 08:23:22.305224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v67DMkXLCu 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:11.485 00:12:11.485 real 0m4.910s 00:12:11.485 user 0m5.840s 
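The pass/fail decision at the end of each error test comes straight from the bdevperf results JSON shown above: `fail_per_s` is the failed I/O count divided by runtime (the trace extracts the final figure from the bdevperf log with `grep -v Job | grep raid_bdev1 | awk '{print $6}'`), and the reported MiB/s is IOPS scaled by the 128 KiB I/O size. Reproducing the write-test numbers with the same arithmetic, values copied from the JSON:

```shell
#!/bin/sh
# Values from the raid_write_error_test results JSON above.
io_failed=1
runtime=1.401098
iops=14349.460209064606
io_size=131072   # 128 KiB, matching bdevperf's -o 128k

# fail_per_s = io_failed / runtime. The test only checks it != 0.00,
# i.e. that the injected write error was actually observed.
fail_per_s=$(awk -v f="$io_failed" -v t="$runtime" 'BEGIN{printf "%.2f", f/t}')

# MiB/s = IOPS * I/O size / 2^20.
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN{printf "%.4f", i*s/1048576}')

echo "fail_per_s=$fail_per_s mibps=$mibps"   # fail_per_s=0.71 mibps=1793.6825
```

Both derived values match the trace: `fail_per_s=0.71` in the `[[ 0.71 != \0\.\0\0 ]]` check, and `"mibps": 1793.6825261330757` in the JSON. The read test's 0.73 follows the same way from its runtime of 1.361141 s.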
00:12:11.485 sys 0m0.603s 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.485 ************************************ 00:12:11.485 END TEST raid_write_error_test 00:12:11.485 ************************************ 00:12:11.485 08:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.485 08:23:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:11.485 08:23:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:11.485 08:23:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.485 08:23:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.485 08:23:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.485 ************************************ 00:12:11.485 START TEST raid_state_function_test 00:12:11.485 ************************************ 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.485 
08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:11.485 08:23:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73371 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73371' 00:12:11.485 Process raid pid: 73371 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73371 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73371 ']' 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.485 08:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.485 [2024-12-13 08:23:23.733264] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:11.485 [2024-12-13 08:23:23.733489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.744 [2024-12-13 08:23:23.909329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.744 [2024-12-13 08:23:24.032270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.003 [2024-12-13 08:23:24.245001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.003 [2024-12-13 08:23:24.245127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 [2024-12-13 08:23:24.614154] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.261 [2024-12-13 08:23:24.614259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.261 [2024-12-13 08:23:24.614295] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.261 [2024-12-13 08:23:24.614319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.261 [2024-12-13 08:23:24.614339] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:12.261 [2024-12-13 08:23:24.614364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.261 [2024-12-13 08:23:24.614407] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.261 [2024-12-13 08:23:24.614436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.261 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.520 "name": "Existed_Raid", 00:12:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.520 "strip_size_kb": 0, 00:12:12.520 "state": "configuring", 00:12:12.520 "raid_level": "raid1", 00:12:12.520 "superblock": false, 00:12:12.520 "num_base_bdevs": 4, 00:12:12.520 "num_base_bdevs_discovered": 0, 00:12:12.520 "num_base_bdevs_operational": 4, 00:12:12.520 "base_bdevs_list": [ 00:12:12.520 { 00:12:12.520 "name": "BaseBdev1", 00:12:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.520 "is_configured": false, 00:12:12.520 "data_offset": 0, 00:12:12.520 "data_size": 0 00:12:12.520 }, 00:12:12.520 { 00:12:12.520 "name": "BaseBdev2", 00:12:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.520 "is_configured": false, 00:12:12.520 "data_offset": 0, 00:12:12.520 "data_size": 0 00:12:12.520 }, 00:12:12.520 { 00:12:12.520 "name": "BaseBdev3", 00:12:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.520 "is_configured": false, 00:12:12.520 "data_offset": 0, 00:12:12.520 "data_size": 0 00:12:12.520 }, 00:12:12.520 { 00:12:12.520 "name": "BaseBdev4", 00:12:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.520 "is_configured": false, 00:12:12.520 "data_offset": 0, 00:12:12.520 "data_size": 0 00:12:12.520 } 00:12:12.520 ] 00:12:12.520 }' 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.520 08:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 [2024-12-13 08:23:25.073326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.779 [2024-12-13 08:23:25.073368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 [2024-12-13 08:23:25.081301] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.779 [2024-12-13 08:23:25.081416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.779 [2024-12-13 08:23:25.081450] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.779 [2024-12-13 08:23:25.081478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.779 [2024-12-13 08:23:25.081502] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.779 [2024-12-13 08:23:25.081573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.779 [2024-12-13 08:23:25.081609] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.779 [2024-12-13 08:23:25.081640] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 [2024-12-13 08:23:25.129300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.779 BaseBdev1 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.779 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.038 [ 00:12:13.038 { 00:12:13.038 "name": "BaseBdev1", 00:12:13.038 "aliases": [ 00:12:13.038 "0019a355-d2e6-45e9-a4a5-7c6c832c8875" 00:12:13.038 ], 00:12:13.038 "product_name": "Malloc disk", 00:12:13.038 "block_size": 512, 00:12:13.038 "num_blocks": 65536, 00:12:13.038 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:13.038 "assigned_rate_limits": { 00:12:13.038 "rw_ios_per_sec": 0, 00:12:13.038 "rw_mbytes_per_sec": 0, 00:12:13.038 "r_mbytes_per_sec": 0, 00:12:13.038 "w_mbytes_per_sec": 0 00:12:13.038 }, 00:12:13.038 "claimed": true, 00:12:13.038 "claim_type": "exclusive_write", 00:12:13.038 "zoned": false, 00:12:13.038 "supported_io_types": { 00:12:13.038 "read": true, 00:12:13.038 "write": true, 00:12:13.038 "unmap": true, 00:12:13.038 "flush": true, 00:12:13.038 "reset": true, 00:12:13.038 "nvme_admin": false, 00:12:13.038 "nvme_io": false, 00:12:13.038 "nvme_io_md": false, 00:12:13.038 "write_zeroes": true, 00:12:13.038 "zcopy": true, 00:12:13.038 "get_zone_info": false, 00:12:13.038 "zone_management": false, 00:12:13.038 "zone_append": false, 00:12:13.038 "compare": false, 00:12:13.038 "compare_and_write": false, 00:12:13.038 "abort": true, 00:12:13.038 "seek_hole": false, 00:12:13.038 "seek_data": false, 00:12:13.038 "copy": true, 00:12:13.038 "nvme_iov_md": false 00:12:13.038 }, 00:12:13.038 "memory_domains": [ 00:12:13.038 { 00:12:13.038 "dma_device_id": "system", 00:12:13.038 "dma_device_type": 1 00:12:13.038 }, 00:12:13.038 { 00:12:13.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.038 "dma_device_type": 2 00:12:13.038 } 00:12:13.038 ], 00:12:13.038 "driver_specific": {} 00:12:13.038 } 00:12:13.038 ] 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.038 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.039 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.039 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.039 "name": "Existed_Raid", 
00:12:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.039 "strip_size_kb": 0, 00:12:13.039 "state": "configuring", 00:12:13.039 "raid_level": "raid1", 00:12:13.039 "superblock": false, 00:12:13.039 "num_base_bdevs": 4, 00:12:13.039 "num_base_bdevs_discovered": 1, 00:12:13.039 "num_base_bdevs_operational": 4, 00:12:13.039 "base_bdevs_list": [ 00:12:13.039 { 00:12:13.039 "name": "BaseBdev1", 00:12:13.039 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:13.039 "is_configured": true, 00:12:13.039 "data_offset": 0, 00:12:13.039 "data_size": 65536 00:12:13.039 }, 00:12:13.039 { 00:12:13.039 "name": "BaseBdev2", 00:12:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.039 "is_configured": false, 00:12:13.039 "data_offset": 0, 00:12:13.039 "data_size": 0 00:12:13.039 }, 00:12:13.039 { 00:12:13.039 "name": "BaseBdev3", 00:12:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.039 "is_configured": false, 00:12:13.039 "data_offset": 0, 00:12:13.039 "data_size": 0 00:12:13.039 }, 00:12:13.039 { 00:12:13.039 "name": "BaseBdev4", 00:12:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.039 "is_configured": false, 00:12:13.039 "data_offset": 0, 00:12:13.039 "data_size": 0 00:12:13.039 } 00:12:13.039 ] 00:12:13.039 }' 00:12:13.039 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.039 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.297 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.297 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.297 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.297 [2024-12-13 08:23:25.600568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.297 [2024-12-13 08:23:25.600694] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 [2024-12-13 08:23:25.608596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.298 [2024-12-13 08:23:25.610406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.298 [2024-12-13 08:23:25.610444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.298 [2024-12-13 08:23:25.610454] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.298 [2024-12-13 08:23:25.610466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.298 [2024-12-13 08:23:25.610473] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.298 [2024-12-13 08:23:25.610482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.298 
08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.556 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.556 "name": "Existed_Raid", 00:12:13.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.556 "strip_size_kb": 0, 00:12:13.556 "state": "configuring", 00:12:13.556 "raid_level": "raid1", 00:12:13.556 "superblock": false, 00:12:13.556 "num_base_bdevs": 4, 00:12:13.556 "num_base_bdevs_discovered": 1, 
00:12:13.556 "num_base_bdevs_operational": 4, 00:12:13.556 "base_bdevs_list": [ 00:12:13.556 { 00:12:13.556 "name": "BaseBdev1", 00:12:13.556 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:13.556 "is_configured": true, 00:12:13.556 "data_offset": 0, 00:12:13.556 "data_size": 65536 00:12:13.556 }, 00:12:13.556 { 00:12:13.556 "name": "BaseBdev2", 00:12:13.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.556 "is_configured": false, 00:12:13.556 "data_offset": 0, 00:12:13.556 "data_size": 0 00:12:13.556 }, 00:12:13.556 { 00:12:13.556 "name": "BaseBdev3", 00:12:13.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.556 "is_configured": false, 00:12:13.556 "data_offset": 0, 00:12:13.556 "data_size": 0 00:12:13.556 }, 00:12:13.556 { 00:12:13.556 "name": "BaseBdev4", 00:12:13.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.556 "is_configured": false, 00:12:13.556 "data_offset": 0, 00:12:13.556 "data_size": 0 00:12:13.556 } 00:12:13.556 ] 00:12:13.556 }' 00:12:13.556 08:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.556 08:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 [2024-12-13 08:23:26.070261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.815 BaseBdev2 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 [ 00:12:13.815 { 00:12:13.815 "name": "BaseBdev2", 00:12:13.815 "aliases": [ 00:12:13.815 "114e7a83-44c1-4662-a4e6-dbe31f72eace" 00:12:13.815 ], 00:12:13.815 "product_name": "Malloc disk", 00:12:13.815 "block_size": 512, 00:12:13.815 "num_blocks": 65536, 00:12:13.815 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:13.815 "assigned_rate_limits": { 00:12:13.815 "rw_ios_per_sec": 0, 00:12:13.815 "rw_mbytes_per_sec": 0, 00:12:13.815 "r_mbytes_per_sec": 0, 00:12:13.815 "w_mbytes_per_sec": 0 00:12:13.815 }, 00:12:13.815 "claimed": true, 00:12:13.815 "claim_type": "exclusive_write", 00:12:13.815 "zoned": false, 00:12:13.815 "supported_io_types": { 00:12:13.815 "read": true, 
00:12:13.815 "write": true, 00:12:13.815 "unmap": true, 00:12:13.815 "flush": true, 00:12:13.815 "reset": true, 00:12:13.815 "nvme_admin": false, 00:12:13.815 "nvme_io": false, 00:12:13.815 "nvme_io_md": false, 00:12:13.815 "write_zeroes": true, 00:12:13.815 "zcopy": true, 00:12:13.815 "get_zone_info": false, 00:12:13.815 "zone_management": false, 00:12:13.815 "zone_append": false, 00:12:13.815 "compare": false, 00:12:13.815 "compare_and_write": false, 00:12:13.815 "abort": true, 00:12:13.815 "seek_hole": false, 00:12:13.815 "seek_data": false, 00:12:13.815 "copy": true, 00:12:13.815 "nvme_iov_md": false 00:12:13.815 }, 00:12:13.815 "memory_domains": [ 00:12:13.815 { 00:12:13.815 "dma_device_id": "system", 00:12:13.815 "dma_device_type": 1 00:12:13.815 }, 00:12:13.815 { 00:12:13.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.815 "dma_device_type": 2 00:12:13.815 } 00:12:13.815 ], 00:12:13.815 "driver_specific": {} 00:12:13.815 } 00:12:13.815 ] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.815 "name": "Existed_Raid", 00:12:13.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.815 "strip_size_kb": 0, 00:12:13.815 "state": "configuring", 00:12:13.815 "raid_level": "raid1", 00:12:13.815 "superblock": false, 00:12:13.815 "num_base_bdevs": 4, 00:12:13.815 "num_base_bdevs_discovered": 2, 00:12:13.815 "num_base_bdevs_operational": 4, 00:12:13.815 "base_bdevs_list": [ 00:12:13.815 { 00:12:13.815 "name": "BaseBdev1", 00:12:13.816 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:13.816 "is_configured": true, 00:12:13.816 "data_offset": 0, 00:12:13.816 "data_size": 65536 00:12:13.816 }, 00:12:13.816 { 00:12:13.816 "name": "BaseBdev2", 00:12:13.816 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:13.816 "is_configured": true, 
00:12:13.816 "data_offset": 0, 00:12:13.816 "data_size": 65536 00:12:13.816 }, 00:12:13.816 { 00:12:13.816 "name": "BaseBdev3", 00:12:13.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.816 "is_configured": false, 00:12:13.816 "data_offset": 0, 00:12:13.816 "data_size": 0 00:12:13.816 }, 00:12:13.816 { 00:12:13.816 "name": "BaseBdev4", 00:12:13.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.816 "is_configured": false, 00:12:13.816 "data_offset": 0, 00:12:13.816 "data_size": 0 00:12:13.816 } 00:12:13.816 ] 00:12:13.816 }' 00:12:13.816 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.816 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.384 [2024-12-13 08:23:26.620521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.384 BaseBdev3 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.384 [ 00:12:14.384 { 00:12:14.384 "name": "BaseBdev3", 00:12:14.384 "aliases": [ 00:12:14.384 "f2ae6d90-4307-4c32-b325-a59a0e811ca9" 00:12:14.384 ], 00:12:14.384 "product_name": "Malloc disk", 00:12:14.384 "block_size": 512, 00:12:14.384 "num_blocks": 65536, 00:12:14.384 "uuid": "f2ae6d90-4307-4c32-b325-a59a0e811ca9", 00:12:14.384 "assigned_rate_limits": { 00:12:14.384 "rw_ios_per_sec": 0, 00:12:14.384 "rw_mbytes_per_sec": 0, 00:12:14.384 "r_mbytes_per_sec": 0, 00:12:14.384 "w_mbytes_per_sec": 0 00:12:14.384 }, 00:12:14.384 "claimed": true, 00:12:14.384 "claim_type": "exclusive_write", 00:12:14.384 "zoned": false, 00:12:14.384 "supported_io_types": { 00:12:14.384 "read": true, 00:12:14.384 "write": true, 00:12:14.384 "unmap": true, 00:12:14.384 "flush": true, 00:12:14.384 "reset": true, 00:12:14.384 "nvme_admin": false, 00:12:14.384 "nvme_io": false, 00:12:14.384 "nvme_io_md": false, 00:12:14.384 "write_zeroes": true, 00:12:14.384 "zcopy": true, 00:12:14.384 "get_zone_info": false, 00:12:14.384 "zone_management": false, 00:12:14.384 "zone_append": false, 00:12:14.384 "compare": false, 00:12:14.384 "compare_and_write": false, 
00:12:14.384 "abort": true, 00:12:14.384 "seek_hole": false, 00:12:14.384 "seek_data": false, 00:12:14.384 "copy": true, 00:12:14.384 "nvme_iov_md": false 00:12:14.384 }, 00:12:14.384 "memory_domains": [ 00:12:14.384 { 00:12:14.384 "dma_device_id": "system", 00:12:14.384 "dma_device_type": 1 00:12:14.384 }, 00:12:14.384 { 00:12:14.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.384 "dma_device_type": 2 00:12:14.384 } 00:12:14.384 ], 00:12:14.384 "driver_specific": {} 00:12:14.384 } 00:12:14.384 ] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.384 "name": "Existed_Raid", 00:12:14.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.384 "strip_size_kb": 0, 00:12:14.384 "state": "configuring", 00:12:14.384 "raid_level": "raid1", 00:12:14.384 "superblock": false, 00:12:14.384 "num_base_bdevs": 4, 00:12:14.384 "num_base_bdevs_discovered": 3, 00:12:14.384 "num_base_bdevs_operational": 4, 00:12:14.384 "base_bdevs_list": [ 00:12:14.384 { 00:12:14.384 "name": "BaseBdev1", 00:12:14.384 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:14.384 "is_configured": true, 00:12:14.384 "data_offset": 0, 00:12:14.384 "data_size": 65536 00:12:14.384 }, 00:12:14.384 { 00:12:14.384 "name": "BaseBdev2", 00:12:14.384 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:14.384 "is_configured": true, 00:12:14.384 "data_offset": 0, 00:12:14.384 "data_size": 65536 00:12:14.384 }, 00:12:14.384 { 00:12:14.384 "name": "BaseBdev3", 00:12:14.384 "uuid": "f2ae6d90-4307-4c32-b325-a59a0e811ca9", 00:12:14.384 "is_configured": true, 00:12:14.384 "data_offset": 0, 00:12:14.384 "data_size": 65536 00:12:14.384 }, 00:12:14.384 { 00:12:14.384 "name": "BaseBdev4", 00:12:14.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.384 "is_configured": false, 
00:12:14.384 "data_offset": 0, 00:12:14.384 "data_size": 0 00:12:14.384 } 00:12:14.384 ] 00:12:14.384 }' 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.384 08:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.952 [2024-12-13 08:23:27.158644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:14.952 [2024-12-13 08:23:27.158705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.952 [2024-12-13 08:23:27.158714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:14.952 [2024-12-13 08:23:27.159013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.952 [2024-12-13 08:23:27.159255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.952 [2024-12-13 08:23:27.159273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.952 [2024-12-13 08:23:27.159563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.952 BaseBdev4 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.952 [ 00:12:14.952 { 00:12:14.952 "name": "BaseBdev4", 00:12:14.952 "aliases": [ 00:12:14.952 "2c5ffb6e-c66c-45fc-9c9c-c07e0be46a3b" 00:12:14.952 ], 00:12:14.952 "product_name": "Malloc disk", 00:12:14.952 "block_size": 512, 00:12:14.952 "num_blocks": 65536, 00:12:14.952 "uuid": "2c5ffb6e-c66c-45fc-9c9c-c07e0be46a3b", 00:12:14.952 "assigned_rate_limits": { 00:12:14.952 "rw_ios_per_sec": 0, 00:12:14.952 "rw_mbytes_per_sec": 0, 00:12:14.952 "r_mbytes_per_sec": 0, 00:12:14.952 "w_mbytes_per_sec": 0 00:12:14.952 }, 00:12:14.952 "claimed": true, 00:12:14.952 "claim_type": "exclusive_write", 00:12:14.952 "zoned": false, 00:12:14.952 "supported_io_types": { 00:12:14.952 "read": true, 00:12:14.952 "write": true, 00:12:14.952 "unmap": true, 00:12:14.952 "flush": true, 00:12:14.952 "reset": true, 00:12:14.952 
"nvme_admin": false, 00:12:14.952 "nvme_io": false, 00:12:14.952 "nvme_io_md": false, 00:12:14.952 "write_zeroes": true, 00:12:14.952 "zcopy": true, 00:12:14.952 "get_zone_info": false, 00:12:14.952 "zone_management": false, 00:12:14.952 "zone_append": false, 00:12:14.952 "compare": false, 00:12:14.952 "compare_and_write": false, 00:12:14.952 "abort": true, 00:12:14.952 "seek_hole": false, 00:12:14.952 "seek_data": false, 00:12:14.952 "copy": true, 00:12:14.952 "nvme_iov_md": false 00:12:14.952 }, 00:12:14.952 "memory_domains": [ 00:12:14.952 { 00:12:14.952 "dma_device_id": "system", 00:12:14.952 "dma_device_type": 1 00:12:14.952 }, 00:12:14.952 { 00:12:14.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.952 "dma_device_type": 2 00:12:14.952 } 00:12:14.952 ], 00:12:14.952 "driver_specific": {} 00:12:14.952 } 00:12:14.952 ] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.952 08:23:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.952 "name": "Existed_Raid", 00:12:14.952 "uuid": "7b062390-fe15-4fb6-bb8f-a18ca4b647bc", 00:12:14.952 "strip_size_kb": 0, 00:12:14.952 "state": "online", 00:12:14.952 "raid_level": "raid1", 00:12:14.952 "superblock": false, 00:12:14.952 "num_base_bdevs": 4, 00:12:14.952 "num_base_bdevs_discovered": 4, 00:12:14.952 "num_base_bdevs_operational": 4, 00:12:14.952 "base_bdevs_list": [ 00:12:14.952 { 00:12:14.952 "name": "BaseBdev1", 00:12:14.952 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:14.952 "is_configured": true, 00:12:14.952 "data_offset": 0, 00:12:14.952 "data_size": 65536 00:12:14.952 }, 00:12:14.952 { 00:12:14.952 "name": "BaseBdev2", 00:12:14.952 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:14.952 "is_configured": true, 00:12:14.952 "data_offset": 0, 00:12:14.952 "data_size": 65536 00:12:14.952 }, 00:12:14.952 { 00:12:14.952 "name": "BaseBdev3", 00:12:14.952 "uuid": 
"f2ae6d90-4307-4c32-b325-a59a0e811ca9", 00:12:14.952 "is_configured": true, 00:12:14.952 "data_offset": 0, 00:12:14.952 "data_size": 65536 00:12:14.952 }, 00:12:14.952 { 00:12:14.952 "name": "BaseBdev4", 00:12:14.952 "uuid": "2c5ffb6e-c66c-45fc-9c9c-c07e0be46a3b", 00:12:14.952 "is_configured": true, 00:12:14.952 "data_offset": 0, 00:12:14.952 "data_size": 65536 00:12:14.952 } 00:12:14.952 ] 00:12:14.952 }' 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.952 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.520 [2024-12-13 08:23:27.654285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.520 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.520 08:23:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.520 "name": "Existed_Raid", 00:12:15.520 "aliases": [ 00:12:15.520 "7b062390-fe15-4fb6-bb8f-a18ca4b647bc" 00:12:15.520 ], 00:12:15.520 "product_name": "Raid Volume", 00:12:15.520 "block_size": 512, 00:12:15.520 "num_blocks": 65536, 00:12:15.520 "uuid": "7b062390-fe15-4fb6-bb8f-a18ca4b647bc", 00:12:15.520 "assigned_rate_limits": { 00:12:15.520 "rw_ios_per_sec": 0, 00:12:15.520 "rw_mbytes_per_sec": 0, 00:12:15.520 "r_mbytes_per_sec": 0, 00:12:15.520 "w_mbytes_per_sec": 0 00:12:15.520 }, 00:12:15.520 "claimed": false, 00:12:15.520 "zoned": false, 00:12:15.520 "supported_io_types": { 00:12:15.520 "read": true, 00:12:15.520 "write": true, 00:12:15.520 "unmap": false, 00:12:15.520 "flush": false, 00:12:15.520 "reset": true, 00:12:15.520 "nvme_admin": false, 00:12:15.520 "nvme_io": false, 00:12:15.520 "nvme_io_md": false, 00:12:15.520 "write_zeroes": true, 00:12:15.520 "zcopy": false, 00:12:15.520 "get_zone_info": false, 00:12:15.520 "zone_management": false, 00:12:15.520 "zone_append": false, 00:12:15.520 "compare": false, 00:12:15.521 "compare_and_write": false, 00:12:15.521 "abort": false, 00:12:15.521 "seek_hole": false, 00:12:15.521 "seek_data": false, 00:12:15.521 "copy": false, 00:12:15.521 "nvme_iov_md": false 00:12:15.521 }, 00:12:15.521 "memory_domains": [ 00:12:15.521 { 00:12:15.521 "dma_device_id": "system", 00:12:15.521 "dma_device_type": 1 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.521 "dma_device_type": 2 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "system", 00:12:15.521 "dma_device_type": 1 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.521 "dma_device_type": 2 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "system", 00:12:15.521 "dma_device_type": 1 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:15.521 "dma_device_type": 2 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "system", 00:12:15.521 "dma_device_type": 1 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.521 "dma_device_type": 2 00:12:15.521 } 00:12:15.521 ], 00:12:15.521 "driver_specific": { 00:12:15.521 "raid": { 00:12:15.521 "uuid": "7b062390-fe15-4fb6-bb8f-a18ca4b647bc", 00:12:15.521 "strip_size_kb": 0, 00:12:15.521 "state": "online", 00:12:15.521 "raid_level": "raid1", 00:12:15.521 "superblock": false, 00:12:15.521 "num_base_bdevs": 4, 00:12:15.521 "num_base_bdevs_discovered": 4, 00:12:15.521 "num_base_bdevs_operational": 4, 00:12:15.521 "base_bdevs_list": [ 00:12:15.521 { 00:12:15.521 "name": "BaseBdev1", 00:12:15.521 "uuid": "0019a355-d2e6-45e9-a4a5-7c6c832c8875", 00:12:15.521 "is_configured": true, 00:12:15.521 "data_offset": 0, 00:12:15.521 "data_size": 65536 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "name": "BaseBdev2", 00:12:15.521 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:15.521 "is_configured": true, 00:12:15.521 "data_offset": 0, 00:12:15.521 "data_size": 65536 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "name": "BaseBdev3", 00:12:15.521 "uuid": "f2ae6d90-4307-4c32-b325-a59a0e811ca9", 00:12:15.521 "is_configured": true, 00:12:15.521 "data_offset": 0, 00:12:15.521 "data_size": 65536 00:12:15.521 }, 00:12:15.521 { 00:12:15.521 "name": "BaseBdev4", 00:12:15.521 "uuid": "2c5ffb6e-c66c-45fc-9c9c-c07e0be46a3b", 00:12:15.521 "is_configured": true, 00:12:15.521 "data_offset": 0, 00:12:15.521 "data_size": 65536 00:12:15.521 } 00:12:15.521 ] 00:12:15.521 } 00:12:15.521 } 00:12:15.521 }' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:15.521 BaseBdev2 00:12:15.521 BaseBdev3 
00:12:15.521 BaseBdev4' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.521 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.782 08:23:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.782 08:23:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.782 08:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.782 [2024-12-13 08:23:27.977393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.782 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.783 
08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.783 "name": "Existed_Raid", 00:12:15.783 "uuid": "7b062390-fe15-4fb6-bb8f-a18ca4b647bc", 00:12:15.783 "strip_size_kb": 0, 00:12:15.783 "state": "online", 00:12:15.783 "raid_level": "raid1", 00:12:15.783 "superblock": false, 00:12:15.783 "num_base_bdevs": 4, 00:12:15.783 "num_base_bdevs_discovered": 3, 00:12:15.783 "num_base_bdevs_operational": 3, 00:12:15.783 "base_bdevs_list": [ 00:12:15.783 { 00:12:15.783 "name": null, 00:12:15.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.783 "is_configured": false, 00:12:15.783 "data_offset": 0, 00:12:15.783 "data_size": 65536 00:12:15.783 }, 00:12:15.783 { 00:12:15.783 "name": "BaseBdev2", 00:12:15.783 "uuid": "114e7a83-44c1-4662-a4e6-dbe31f72eace", 00:12:15.783 "is_configured": true, 00:12:15.783 "data_offset": 0, 00:12:15.783 "data_size": 65536 00:12:15.783 }, 00:12:15.783 { 00:12:15.783 "name": "BaseBdev3", 00:12:15.783 "uuid": "f2ae6d90-4307-4c32-b325-a59a0e811ca9", 00:12:15.783 "is_configured": true, 00:12:15.783 "data_offset": 0, 
00:12:15.783 "data_size": 65536 00:12:15.783 }, 00:12:15.783 { 00:12:15.783 "name": "BaseBdev4", 00:12:15.783 "uuid": "2c5ffb6e-c66c-45fc-9c9c-c07e0be46a3b", 00:12:15.783 "is_configured": true, 00:12:15.783 "data_offset": 0, 00:12:15.783 "data_size": 65536 00:12:15.783 } 00:12:15.783 ] 00:12:15.783 }' 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.783 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 [2024-12-13 08:23:28.589595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.353 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.612 [2024-12-13 08:23:28.743245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.612 08:23:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.612 [2024-12-13 08:23:28.913083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:16.612 [2024-12-13 08:23:28.913256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.871 [2024-12-13 08:23:29.012363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.871 [2024-12-13 08:23:29.012505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.871 [2024-12-13 08:23:29.012550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.871 BaseBdev2 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.871 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ 
-z '' ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.872 [ 00:12:16.872 { 00:12:16.872 "name": "BaseBdev2", 00:12:16.872 "aliases": [ 00:12:16.872 "07596e39-fb3d-4655-af94-dcb61db1fcd3" 00:12:16.872 ], 00:12:16.872 "product_name": "Malloc disk", 00:12:16.872 "block_size": 512, 00:12:16.872 "num_blocks": 65536, 00:12:16.872 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:16.872 "assigned_rate_limits": { 00:12:16.872 "rw_ios_per_sec": 0, 00:12:16.872 "rw_mbytes_per_sec": 0, 00:12:16.872 "r_mbytes_per_sec": 0, 00:12:16.872 "w_mbytes_per_sec": 0 00:12:16.872 }, 00:12:16.872 "claimed": false, 00:12:16.872 "zoned": false, 00:12:16.872 "supported_io_types": { 00:12:16.872 "read": true, 00:12:16.872 "write": true, 00:12:16.872 "unmap": true, 00:12:16.872 "flush": true, 00:12:16.872 "reset": true, 00:12:16.872 "nvme_admin": false, 00:12:16.872 "nvme_io": false, 00:12:16.872 "nvme_io_md": false, 00:12:16.872 "write_zeroes": true, 00:12:16.872 "zcopy": true, 00:12:16.872 "get_zone_info": false, 00:12:16.872 "zone_management": false, 00:12:16.872 "zone_append": false, 00:12:16.872 "compare": false, 00:12:16.872 
"compare_and_write": false, 00:12:16.872 "abort": true, 00:12:16.872 "seek_hole": false, 00:12:16.872 "seek_data": false, 00:12:16.872 "copy": true, 00:12:16.872 "nvme_iov_md": false 00:12:16.872 }, 00:12:16.872 "memory_domains": [ 00:12:16.872 { 00:12:16.872 "dma_device_id": "system", 00:12:16.872 "dma_device_type": 1 00:12:16.872 }, 00:12:16.872 { 00:12:16.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.872 "dma_device_type": 2 00:12:16.872 } 00:12:16.872 ], 00:12:16.872 "driver_specific": {} 00:12:16.872 } 00:12:16.872 ] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.872 BaseBdev3 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.872 [ 00:12:16.872 { 00:12:16.872 "name": "BaseBdev3", 00:12:16.872 "aliases": [ 00:12:16.872 "be861e84-ff8a-454f-8408-b84d97717520" 00:12:16.872 ], 00:12:16.872 "product_name": "Malloc disk", 00:12:16.872 "block_size": 512, 00:12:16.872 "num_blocks": 65536, 00:12:16.872 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:16.872 "assigned_rate_limits": { 00:12:16.872 "rw_ios_per_sec": 0, 00:12:16.872 "rw_mbytes_per_sec": 0, 00:12:16.872 "r_mbytes_per_sec": 0, 00:12:16.872 "w_mbytes_per_sec": 0 00:12:16.872 }, 00:12:16.872 "claimed": false, 00:12:16.872 "zoned": false, 00:12:16.872 "supported_io_types": { 00:12:16.872 "read": true, 00:12:16.872 "write": true, 00:12:16.872 "unmap": true, 00:12:16.872 "flush": true, 00:12:16.872 "reset": true, 00:12:16.872 "nvme_admin": false, 00:12:16.872 "nvme_io": false, 00:12:16.872 "nvme_io_md": false, 00:12:16.872 "write_zeroes": true, 00:12:16.872 "zcopy": true, 00:12:16.872 "get_zone_info": false, 00:12:16.872 "zone_management": false, 00:12:16.872 "zone_append": false, 00:12:16.872 "compare": false, 00:12:16.872 
"compare_and_write": false, 00:12:16.872 "abort": true, 00:12:16.872 "seek_hole": false, 00:12:16.872 "seek_data": false, 00:12:16.872 "copy": true, 00:12:16.872 "nvme_iov_md": false 00:12:16.872 }, 00:12:16.872 "memory_domains": [ 00:12:16.872 { 00:12:16.872 "dma_device_id": "system", 00:12:16.872 "dma_device_type": 1 00:12:16.872 }, 00:12:16.872 { 00:12:16.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.872 "dma_device_type": 2 00:12:16.872 } 00:12:16.872 ], 00:12:16.872 "driver_specific": {} 00:12:16.872 } 00:12:16.872 ] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.872 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 BaseBdev4 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 [ 00:12:17.132 { 00:12:17.132 "name": "BaseBdev4", 00:12:17.132 "aliases": [ 00:12:17.132 "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb" 00:12:17.132 ], 00:12:17.132 "product_name": "Malloc disk", 00:12:17.132 "block_size": 512, 00:12:17.132 "num_blocks": 65536, 00:12:17.132 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:17.132 "assigned_rate_limits": { 00:12:17.132 "rw_ios_per_sec": 0, 00:12:17.132 "rw_mbytes_per_sec": 0, 00:12:17.132 "r_mbytes_per_sec": 0, 00:12:17.132 "w_mbytes_per_sec": 0 00:12:17.132 }, 00:12:17.132 "claimed": false, 00:12:17.132 "zoned": false, 00:12:17.132 "supported_io_types": { 00:12:17.132 "read": true, 00:12:17.132 "write": true, 00:12:17.132 "unmap": true, 00:12:17.132 "flush": true, 00:12:17.132 "reset": true, 00:12:17.132 "nvme_admin": false, 00:12:17.132 "nvme_io": false, 00:12:17.132 "nvme_io_md": false, 00:12:17.132 "write_zeroes": true, 00:12:17.132 "zcopy": true, 00:12:17.132 "get_zone_info": false, 00:12:17.132 "zone_management": false, 00:12:17.132 "zone_append": false, 00:12:17.132 "compare": false, 00:12:17.132 
"compare_and_write": false, 00:12:17.132 "abort": true, 00:12:17.132 "seek_hole": false, 00:12:17.132 "seek_data": false, 00:12:17.132 "copy": true, 00:12:17.132 "nvme_iov_md": false 00:12:17.132 }, 00:12:17.132 "memory_domains": [ 00:12:17.132 { 00:12:17.132 "dma_device_id": "system", 00:12:17.132 "dma_device_type": 1 00:12:17.132 }, 00:12:17.132 { 00:12:17.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.132 "dma_device_type": 2 00:12:17.132 } 00:12:17.132 ], 00:12:17.132 "driver_specific": {} 00:12:17.132 } 00:12:17.132 ] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 [2024-12-13 08:23:29.294433] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.132 [2024-12-13 08:23:29.294557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.132 [2024-12-13 08:23:29.294610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.132 [2024-12-13 08:23:29.296693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.132 [2024-12-13 08:23:29.296823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.132 "name": "Existed_Raid", 00:12:17.132 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:17.132 "strip_size_kb": 0, 00:12:17.132 "state": "configuring", 00:12:17.132 "raid_level": "raid1", 00:12:17.132 "superblock": false, 00:12:17.132 "num_base_bdevs": 4, 00:12:17.132 "num_base_bdevs_discovered": 3, 00:12:17.132 "num_base_bdevs_operational": 4, 00:12:17.132 "base_bdevs_list": [ 00:12:17.132 { 00:12:17.132 "name": "BaseBdev1", 00:12:17.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.132 "is_configured": false, 00:12:17.132 "data_offset": 0, 00:12:17.132 "data_size": 0 00:12:17.132 }, 00:12:17.132 { 00:12:17.132 "name": "BaseBdev2", 00:12:17.132 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:17.132 "is_configured": true, 00:12:17.132 "data_offset": 0, 00:12:17.132 "data_size": 65536 00:12:17.132 }, 00:12:17.132 { 00:12:17.132 "name": "BaseBdev3", 00:12:17.132 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:17.132 "is_configured": true, 00:12:17.132 "data_offset": 0, 00:12:17.132 "data_size": 65536 00:12:17.132 }, 00:12:17.132 { 00:12:17.132 "name": "BaseBdev4", 00:12:17.132 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:17.132 "is_configured": true, 00:12:17.132 "data_offset": 0, 00:12:17.132 "data_size": 65536 00:12:17.132 } 00:12:17.132 ] 00:12:17.132 }' 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.132 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.391 [2024-12-13 08:23:29.722593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.391 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.650 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.650 "name": "Existed_Raid", 00:12:17.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.650 
"strip_size_kb": 0, 00:12:17.650 "state": "configuring", 00:12:17.650 "raid_level": "raid1", 00:12:17.650 "superblock": false, 00:12:17.651 "num_base_bdevs": 4, 00:12:17.651 "num_base_bdevs_discovered": 2, 00:12:17.651 "num_base_bdevs_operational": 4, 00:12:17.651 "base_bdevs_list": [ 00:12:17.651 { 00:12:17.651 "name": "BaseBdev1", 00:12:17.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.651 "is_configured": false, 00:12:17.651 "data_offset": 0, 00:12:17.651 "data_size": 0 00:12:17.651 }, 00:12:17.651 { 00:12:17.651 "name": null, 00:12:17.651 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:17.651 "is_configured": false, 00:12:17.651 "data_offset": 0, 00:12:17.651 "data_size": 65536 00:12:17.651 }, 00:12:17.651 { 00:12:17.651 "name": "BaseBdev3", 00:12:17.651 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:17.651 "is_configured": true, 00:12:17.651 "data_offset": 0, 00:12:17.651 "data_size": 65536 00:12:17.651 }, 00:12:17.651 { 00:12:17.651 "name": "BaseBdev4", 00:12:17.651 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:17.651 "is_configured": true, 00:12:17.651 "data_offset": 0, 00:12:17.651 "data_size": 65536 00:12:17.651 } 00:12:17.651 ] 00:12:17.651 }' 00:12:17.651 08:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.651 08:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.910 08:23:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.910 [2024-12-13 08:23:30.258716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.910 BaseBdev1 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.910 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 [ 00:12:18.169 { 00:12:18.169 "name": "BaseBdev1", 00:12:18.169 "aliases": [ 00:12:18.169 "e633d39b-400c-4f2f-aa61-a68077565ed8" 00:12:18.169 ], 00:12:18.169 "product_name": "Malloc disk", 00:12:18.169 "block_size": 512, 00:12:18.169 "num_blocks": 65536, 00:12:18.169 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:18.169 "assigned_rate_limits": { 00:12:18.169 "rw_ios_per_sec": 0, 00:12:18.169 "rw_mbytes_per_sec": 0, 00:12:18.169 "r_mbytes_per_sec": 0, 00:12:18.169 "w_mbytes_per_sec": 0 00:12:18.169 }, 00:12:18.169 "claimed": true, 00:12:18.169 "claim_type": "exclusive_write", 00:12:18.169 "zoned": false, 00:12:18.169 "supported_io_types": { 00:12:18.169 "read": true, 00:12:18.169 "write": true, 00:12:18.169 "unmap": true, 00:12:18.169 "flush": true, 00:12:18.169 "reset": true, 00:12:18.169 "nvme_admin": false, 00:12:18.169 "nvme_io": false, 00:12:18.169 "nvme_io_md": false, 00:12:18.169 "write_zeroes": true, 00:12:18.169 "zcopy": true, 00:12:18.169 "get_zone_info": false, 00:12:18.169 "zone_management": false, 00:12:18.169 "zone_append": false, 00:12:18.169 "compare": false, 00:12:18.169 "compare_and_write": false, 00:12:18.169 "abort": true, 00:12:18.169 "seek_hole": false, 00:12:18.169 "seek_data": false, 00:12:18.169 "copy": true, 00:12:18.169 "nvme_iov_md": false 00:12:18.169 }, 00:12:18.169 "memory_domains": [ 00:12:18.169 { 00:12:18.169 "dma_device_id": "system", 00:12:18.169 "dma_device_type": 1 00:12:18.169 }, 00:12:18.169 { 00:12:18.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.169 "dma_device_type": 2 00:12:18.169 } 00:12:18.169 ], 00:12:18.169 "driver_specific": {} 00:12:18.169 } 00:12:18.169 ] 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.169 "name": "Existed_Raid", 00:12:18.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.169 
"strip_size_kb": 0, 00:12:18.169 "state": "configuring", 00:12:18.169 "raid_level": "raid1", 00:12:18.169 "superblock": false, 00:12:18.169 "num_base_bdevs": 4, 00:12:18.169 "num_base_bdevs_discovered": 3, 00:12:18.169 "num_base_bdevs_operational": 4, 00:12:18.169 "base_bdevs_list": [ 00:12:18.169 { 00:12:18.169 "name": "BaseBdev1", 00:12:18.169 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:18.169 "is_configured": true, 00:12:18.169 "data_offset": 0, 00:12:18.169 "data_size": 65536 00:12:18.169 }, 00:12:18.169 { 00:12:18.169 "name": null, 00:12:18.169 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:18.169 "is_configured": false, 00:12:18.169 "data_offset": 0, 00:12:18.169 "data_size": 65536 00:12:18.169 }, 00:12:18.169 { 00:12:18.169 "name": "BaseBdev3", 00:12:18.169 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:18.169 "is_configured": true, 00:12:18.169 "data_offset": 0, 00:12:18.169 "data_size": 65536 00:12:18.169 }, 00:12:18.169 { 00:12:18.169 "name": "BaseBdev4", 00:12:18.169 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:18.169 "is_configured": true, 00:12:18.169 "data_offset": 0, 00:12:18.169 "data_size": 65536 00:12:18.169 } 00:12:18.169 ] 00:12:18.169 }' 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.169 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.428 
08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.428 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.428 [2024-12-13 08:23:30.789911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.687 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.687 "name": "Existed_Raid", 00:12:18.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.687 "strip_size_kb": 0, 00:12:18.687 "state": "configuring", 00:12:18.687 "raid_level": "raid1", 00:12:18.687 "superblock": false, 00:12:18.687 "num_base_bdevs": 4, 00:12:18.687 "num_base_bdevs_discovered": 2, 00:12:18.687 "num_base_bdevs_operational": 4, 00:12:18.687 "base_bdevs_list": [ 00:12:18.687 { 00:12:18.687 "name": "BaseBdev1", 00:12:18.687 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:18.687 "is_configured": true, 00:12:18.687 "data_offset": 0, 00:12:18.687 "data_size": 65536 00:12:18.687 }, 00:12:18.687 { 00:12:18.687 "name": null, 00:12:18.687 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:18.687 "is_configured": false, 00:12:18.687 "data_offset": 0, 00:12:18.687 "data_size": 65536 00:12:18.687 }, 00:12:18.687 { 00:12:18.687 "name": null, 00:12:18.687 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:18.688 "is_configured": false, 00:12:18.688 "data_offset": 0, 00:12:18.688 "data_size": 65536 00:12:18.688 }, 00:12:18.688 { 00:12:18.688 "name": "BaseBdev4", 00:12:18.688 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:18.688 "is_configured": true, 00:12:18.688 "data_offset": 0, 00:12:18.688 "data_size": 65536 00:12:18.688 } 00:12:18.688 ] 00:12:18.688 }' 00:12:18.688 08:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.688 08:23:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.947 [2024-12-13 08:23:31.305026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.947 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.206 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.206 "name": "Existed_Raid", 00:12:19.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.206 "strip_size_kb": 0, 00:12:19.206 "state": "configuring", 00:12:19.206 "raid_level": "raid1", 00:12:19.206 "superblock": false, 00:12:19.206 "num_base_bdevs": 4, 00:12:19.206 "num_base_bdevs_discovered": 3, 00:12:19.206 "num_base_bdevs_operational": 4, 00:12:19.207 "base_bdevs_list": [ 00:12:19.207 { 00:12:19.207 "name": "BaseBdev1", 00:12:19.207 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:19.207 "is_configured": true, 00:12:19.207 "data_offset": 0, 00:12:19.207 "data_size": 65536 00:12:19.207 }, 00:12:19.207 { 00:12:19.207 "name": null, 00:12:19.207 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:19.207 "is_configured": false, 00:12:19.207 "data_offset": 0, 00:12:19.207 "data_size": 65536 00:12:19.207 }, 00:12:19.207 { 
00:12:19.207 "name": "BaseBdev3", 00:12:19.207 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:19.207 "is_configured": true, 00:12:19.207 "data_offset": 0, 00:12:19.207 "data_size": 65536 00:12:19.207 }, 00:12:19.207 { 00:12:19.207 "name": "BaseBdev4", 00:12:19.207 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:19.207 "is_configured": true, 00:12:19.207 "data_offset": 0, 00:12:19.207 "data_size": 65536 00:12:19.207 } 00:12:19.207 ] 00:12:19.207 }' 00:12:19.207 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.207 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.466 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.466 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.466 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.466 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.466 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.726 [2024-12-13 08:23:31.852183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.726 08:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.726 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.726 "name": "Existed_Raid", 00:12:19.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.726 "strip_size_kb": 0, 00:12:19.726 "state": "configuring", 00:12:19.726 "raid_level": "raid1", 00:12:19.726 "superblock": false, 00:12:19.726 
"num_base_bdevs": 4, 00:12:19.726 "num_base_bdevs_discovered": 2, 00:12:19.726 "num_base_bdevs_operational": 4, 00:12:19.726 "base_bdevs_list": [ 00:12:19.726 { 00:12:19.726 "name": null, 00:12:19.726 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:19.726 "is_configured": false, 00:12:19.726 "data_offset": 0, 00:12:19.726 "data_size": 65536 00:12:19.726 }, 00:12:19.726 { 00:12:19.726 "name": null, 00:12:19.726 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:19.726 "is_configured": false, 00:12:19.726 "data_offset": 0, 00:12:19.726 "data_size": 65536 00:12:19.726 }, 00:12:19.726 { 00:12:19.726 "name": "BaseBdev3", 00:12:19.726 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:19.726 "is_configured": true, 00:12:19.726 "data_offset": 0, 00:12:19.726 "data_size": 65536 00:12:19.726 }, 00:12:19.726 { 00:12:19.726 "name": "BaseBdev4", 00:12:19.726 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:19.726 "is_configured": true, 00:12:19.726 "data_offset": 0, 00:12:19.726 "data_size": 65536 00:12:19.726 } 00:12:19.726 ] 00:12:19.726 }' 00:12:19.726 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.726 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:20.294 08:23:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 [2024-12-13 08:23:32.494424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.294 08:23:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.294 "name": "Existed_Raid", 00:12:20.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.294 "strip_size_kb": 0, 00:12:20.294 "state": "configuring", 00:12:20.294 "raid_level": "raid1", 00:12:20.294 "superblock": false, 00:12:20.294 "num_base_bdevs": 4, 00:12:20.294 "num_base_bdevs_discovered": 3, 00:12:20.294 "num_base_bdevs_operational": 4, 00:12:20.294 "base_bdevs_list": [ 00:12:20.294 { 00:12:20.294 "name": null, 00:12:20.294 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:20.294 "is_configured": false, 00:12:20.294 "data_offset": 0, 00:12:20.294 "data_size": 65536 00:12:20.294 }, 00:12:20.294 { 00:12:20.294 "name": "BaseBdev2", 00:12:20.294 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:20.294 "is_configured": true, 00:12:20.294 "data_offset": 0, 00:12:20.294 "data_size": 65536 00:12:20.294 }, 00:12:20.294 { 00:12:20.294 "name": "BaseBdev3", 00:12:20.294 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:20.294 "is_configured": true, 00:12:20.294 "data_offset": 0, 00:12:20.294 "data_size": 65536 00:12:20.294 }, 00:12:20.294 { 00:12:20.294 "name": "BaseBdev4", 00:12:20.294 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:20.294 "is_configured": true, 00:12:20.294 "data_offset": 0, 00:12:20.294 "data_size": 65536 00:12:20.294 } 00:12:20.294 ] 00:12:20.294 }' 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.294 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 08:23:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:20.862 08:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.862 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.862 08:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e633d39b-400c-4f2f-aa61-a68077565ed8 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 [2024-12-13 08:23:33.102165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:20.862 [2024-12-13 08:23:33.102221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:20.862 [2024-12-13 08:23:33.102231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:20.862 [2024-12-13 08:23:33.102514] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:20.862 [2024-12-13 08:23:33.102697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:20.862 [2024-12-13 08:23:33.102708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:20.862 [2024-12-13 08:23:33.103007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.862 NewBaseBdev 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.862 08:23:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.862 [ 00:12:20.862 { 00:12:20.862 "name": "NewBaseBdev", 00:12:20.862 "aliases": [ 00:12:20.862 "e633d39b-400c-4f2f-aa61-a68077565ed8" 00:12:20.862 ], 00:12:20.862 "product_name": "Malloc disk", 00:12:20.862 "block_size": 512, 00:12:20.862 "num_blocks": 65536, 00:12:20.862 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:20.862 "assigned_rate_limits": { 00:12:20.862 "rw_ios_per_sec": 0, 00:12:20.862 "rw_mbytes_per_sec": 0, 00:12:20.862 "r_mbytes_per_sec": 0, 00:12:20.862 "w_mbytes_per_sec": 0 00:12:20.862 }, 00:12:20.862 "claimed": true, 00:12:20.862 "claim_type": "exclusive_write", 00:12:20.862 "zoned": false, 00:12:20.862 "supported_io_types": { 00:12:20.862 "read": true, 00:12:20.862 "write": true, 00:12:20.862 "unmap": true, 00:12:20.862 "flush": true, 00:12:20.862 "reset": true, 00:12:20.862 "nvme_admin": false, 00:12:20.862 "nvme_io": false, 00:12:20.862 "nvme_io_md": false, 00:12:20.862 "write_zeroes": true, 00:12:20.862 "zcopy": true, 00:12:20.862 "get_zone_info": false, 00:12:20.862 "zone_management": false, 00:12:20.862 "zone_append": false, 00:12:20.862 "compare": false, 00:12:20.862 "compare_and_write": false, 00:12:20.862 "abort": true, 00:12:20.862 "seek_hole": false, 00:12:20.862 "seek_data": false, 00:12:20.862 "copy": true, 00:12:20.862 "nvme_iov_md": false 00:12:20.862 }, 00:12:20.862 "memory_domains": [ 00:12:20.862 { 00:12:20.862 "dma_device_id": "system", 00:12:20.862 "dma_device_type": 1 00:12:20.862 }, 00:12:20.862 { 00:12:20.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.862 "dma_device_type": 2 00:12:20.862 } 00:12:20.862 ], 00:12:20.862 "driver_specific": {} 00:12:20.862 } 00:12:20.862 ] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.862 08:23:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.862 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.863 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.863 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.863 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.863 "name": "Existed_Raid", 00:12:20.863 "uuid": "b39920f6-10e7-4e1f-a682-54214be8767c", 00:12:20.863 "strip_size_kb": 0, 00:12:20.863 "state": "online", 00:12:20.863 "raid_level": "raid1", 
00:12:20.863 "superblock": false, 00:12:20.863 "num_base_bdevs": 4, 00:12:20.863 "num_base_bdevs_discovered": 4, 00:12:20.863 "num_base_bdevs_operational": 4, 00:12:20.863 "base_bdevs_list": [ 00:12:20.863 { 00:12:20.863 "name": "NewBaseBdev", 00:12:20.863 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev2", 00:12:20.863 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev3", 00:12:20.863 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev4", 00:12:20.863 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 65536 00:12:20.863 } 00:12:20.863 ] 00:12:20.863 }' 00:12:20.863 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.863 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.432 [2024-12-13 08:23:33.645674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.432 "name": "Existed_Raid", 00:12:21.432 "aliases": [ 00:12:21.432 "b39920f6-10e7-4e1f-a682-54214be8767c" 00:12:21.432 ], 00:12:21.432 "product_name": "Raid Volume", 00:12:21.432 "block_size": 512, 00:12:21.432 "num_blocks": 65536, 00:12:21.432 "uuid": "b39920f6-10e7-4e1f-a682-54214be8767c", 00:12:21.432 "assigned_rate_limits": { 00:12:21.432 "rw_ios_per_sec": 0, 00:12:21.432 "rw_mbytes_per_sec": 0, 00:12:21.432 "r_mbytes_per_sec": 0, 00:12:21.432 "w_mbytes_per_sec": 0 00:12:21.432 }, 00:12:21.432 "claimed": false, 00:12:21.432 "zoned": false, 00:12:21.432 "supported_io_types": { 00:12:21.432 "read": true, 00:12:21.432 "write": true, 00:12:21.432 "unmap": false, 00:12:21.432 "flush": false, 00:12:21.432 "reset": true, 00:12:21.432 "nvme_admin": false, 00:12:21.432 "nvme_io": false, 00:12:21.432 "nvme_io_md": false, 00:12:21.432 "write_zeroes": true, 00:12:21.432 "zcopy": false, 00:12:21.432 "get_zone_info": false, 00:12:21.432 "zone_management": false, 00:12:21.432 "zone_append": false, 00:12:21.432 "compare": false, 00:12:21.432 "compare_and_write": false, 00:12:21.432 "abort": false, 00:12:21.432 "seek_hole": false, 00:12:21.432 "seek_data": false, 00:12:21.432 "copy": false, 00:12:21.432 
"nvme_iov_md": false 00:12:21.432 }, 00:12:21.432 "memory_domains": [ 00:12:21.432 { 00:12:21.432 "dma_device_id": "system", 00:12:21.432 "dma_device_type": 1 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.432 "dma_device_type": 2 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "system", 00:12:21.432 "dma_device_type": 1 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.432 "dma_device_type": 2 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "system", 00:12:21.432 "dma_device_type": 1 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.432 "dma_device_type": 2 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "system", 00:12:21.432 "dma_device_type": 1 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.432 "dma_device_type": 2 00:12:21.432 } 00:12:21.432 ], 00:12:21.432 "driver_specific": { 00:12:21.432 "raid": { 00:12:21.432 "uuid": "b39920f6-10e7-4e1f-a682-54214be8767c", 00:12:21.432 "strip_size_kb": 0, 00:12:21.432 "state": "online", 00:12:21.432 "raid_level": "raid1", 00:12:21.432 "superblock": false, 00:12:21.432 "num_base_bdevs": 4, 00:12:21.432 "num_base_bdevs_discovered": 4, 00:12:21.432 "num_base_bdevs_operational": 4, 00:12:21.432 "base_bdevs_list": [ 00:12:21.432 { 00:12:21.432 "name": "NewBaseBdev", 00:12:21.432 "uuid": "e633d39b-400c-4f2f-aa61-a68077565ed8", 00:12:21.432 "is_configured": true, 00:12:21.432 "data_offset": 0, 00:12:21.432 "data_size": 65536 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "name": "BaseBdev2", 00:12:21.432 "uuid": "07596e39-fb3d-4655-af94-dcb61db1fcd3", 00:12:21.432 "is_configured": true, 00:12:21.432 "data_offset": 0, 00:12:21.432 "data_size": 65536 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "name": "BaseBdev3", 00:12:21.432 "uuid": "be861e84-ff8a-454f-8408-b84d97717520", 00:12:21.432 "is_configured": true, 
00:12:21.432 "data_offset": 0, 00:12:21.432 "data_size": 65536 00:12:21.432 }, 00:12:21.432 { 00:12:21.432 "name": "BaseBdev4", 00:12:21.432 "uuid": "d59d53fc-bf4e-4c39-8a4b-49f2b10012eb", 00:12:21.432 "is_configured": true, 00:12:21.432 "data_offset": 0, 00:12:21.432 "data_size": 65536 00:12:21.432 } 00:12:21.432 ] 00:12:21.432 } 00:12:21.432 } 00:12:21.432 }' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:21.432 BaseBdev2 00:12:21.432 BaseBdev3 00:12:21.432 BaseBdev4' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.432 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.693 [2024-12-13 08:23:33.972791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:21.693 [2024-12-13 08:23:33.972830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:21.693 [2024-12-13 08:23:33.972958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:21.693 [2024-12-13 08:23:33.973338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:21.693 [2024-12-13 08:23:33.973355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73371
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73371 ']'
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73371
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:21.693 08:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73371
00:12:21.693 killing process with pid 73371 08:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:21.693 08:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:21.693 08:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73371'
00:12:21.693 08:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73371
00:12:21.693 [2024-12-13 08:23:34.020853] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:21.693 08:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73371
00:12:22.262 [2024-12-13 08:23:34.448316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:12:23.640
00:12:23.640 real 0m11.984s
00:12:23.640 user 0m19.020s
00:12:23.640 sys 0m2.130s
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:23.640 ************************************
00:12:23.640 END TEST raid_state_function_test
00:12:23.640 ************************************
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.640 08:23:35 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:12:23.640 08:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:23.640 08:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:23.640 08:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:23.640 ************************************
00:12:23.640 START TEST raid_state_function_test_sb
00:12:23.640 ************************************
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:12:23.640 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74048
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74048'
Process raid pid: 74048
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74048
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74048 ']'
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:23.641 08:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.641 [2024-12-13 08:23:35.788344] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization...
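(Editor's note, not part of the captured log: the `rpc_cmd` lines above are the autotest harness's wrapper around SPDK's JSON-RPC client. The same sequence this test drives can be reproduced by hand with SPDK's `scripts/rpc.py` against the socket the log shows. This is a sketch only, assuming a running SPDK application listening on `/var/tmp/spdk.sock`; the bdev names and RPC arguments mirror the calls visible in this log, not a canonical recipe.)

```shell
# Sketch: replay the RPC sequence this test performs, assuming an SPDK app
# (e.g. bdev_svc) is already listening on /var/tmp/spdk.sock.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Create four malloc base bdevs (32 MiB, 512-byte blocks), as the
# bdev_malloc_create calls later in this log do.
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Assemble them into a raid1 bdev with an on-disk superblock (-s),
# matching the bdev_raid_create invocation in the log.
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Inspect state, then tear down, matching bdev_raid_get_bdevs / bdev_raid_delete.
$RPC bdev_raid_get_bdevs all
$RPC bdev_raid_delete Existed_Raid
```

The harness's `verify_raid_bdev_state` is just `bdev_raid_get_bdevs all` piped through `jq` to select the named raid bdev and compare its `state`, `raid_level`, and base-bdev counts, as the surrounding log lines show.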
00:12:23.641 [2024-12-13 08:23:35.788549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:23.641 [2024-12-13 08:23:35.965464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.901 [2024-12-13 08:23:36.082798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:24.160 [2024-12-13 08:23:36.286759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:24.160 [2024-12-13 08:23:36.286894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.420 [2024-12-13 08:23:36.655947] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:24.420 [2024-12-13 08:23:36.656007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:24.420 [2024-12-13 08:23:36.656019] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:24.420 [2024-12-13 08:23:36.656030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:24.420 [2024-12-13 08:23:36.656037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:24.420 [2024-12-13 08:23:36.656048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:24.420 [2024-12-13 08:23:36.656055] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:24.420 [2024-12-13 08:23:36.656065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:24.420 08:23:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.420 "name": "Existed_Raid",
00:12:24.420 "uuid": "1f43d0bf-9ebe-406b-979c-bebc173623c0",
00:12:24.420 "strip_size_kb": 0,
00:12:24.420 "state": "configuring",
00:12:24.420 "raid_level": "raid1",
00:12:24.420 "superblock": true,
00:12:24.420 "num_base_bdevs": 4,
00:12:24.420 "num_base_bdevs_discovered": 0,
00:12:24.420 "num_base_bdevs_operational": 4,
00:12:24.420 "base_bdevs_list": [
00:12:24.420 {
00:12:24.420 "name": "BaseBdev1",
00:12:24.420 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.420 "is_configured": false,
00:12:24.420 "data_offset": 0,
00:12:24.420 "data_size": 0
00:12:24.420 },
00:12:24.420 {
00:12:24.420 "name": "BaseBdev2",
00:12:24.420 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.420 "is_configured": false,
00:12:24.420 "data_offset": 0,
00:12:24.420 "data_size": 0
00:12:24.420 },
00:12:24.420 {
00:12:24.420 "name": "BaseBdev3",
00:12:24.420 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.420 "is_configured": false,
00:12:24.420 "data_offset": 0,
00:12:24.420 "data_size": 0
00:12:24.420 },
00:12:24.420 {
00:12:24.420 "name": "BaseBdev4",
00:12:24.420 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.420 "is_configured": false,
00:12:24.420 "data_offset": 0,
00:12:24.420 "data_size": 0
00:12:24.420 }
00:12:24.420 ]
00:12:24.420 }'
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.420 08:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 [2024-12-13 08:23:37.067190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:24.990 [2024-12-13 08:23:37.067319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 [2024-12-13 08:23:37.079168] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:24.990 [2024-12-13 08:23:37.079259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:24.990 [2024-12-13 08:23:37.079286] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:24.990 [2024-12-13 08:23:37.079310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:24.990 [2024-12-13 08:23:37.079328] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:24.990 [2024-12-13 08:23:37.079350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:24.990 [2024-12-13 08:23:37.079368] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:24.990 [2024-12-13 08:23:37.079389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 [2024-12-13 08:23:37.126868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.990 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.990 [
00:12:24.990 {
00:12:24.990 "name": "BaseBdev1",
00:12:24.990 "aliases": [
00:12:24.990 "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8"
00:12:24.990 ],
00:12:24.990 "product_name": "Malloc disk",
00:12:24.990 "block_size": 512,
00:12:24.990 "num_blocks": 65536,
00:12:24.991 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8",
00:12:24.991 "assigned_rate_limits": {
00:12:24.991 "rw_ios_per_sec": 0,
00:12:24.991 "rw_mbytes_per_sec": 0,
00:12:24.991 "r_mbytes_per_sec": 0,
00:12:24.991 "w_mbytes_per_sec": 0
00:12:24.991 },
00:12:24.991 "claimed": true,
00:12:24.991 "claim_type": "exclusive_write",
00:12:24.991 "zoned": false,
00:12:24.991 "supported_io_types": {
00:12:24.991 "read": true,
00:12:24.991 "write": true,
00:12:24.991 "unmap": true,
00:12:24.991 "flush": true,
00:12:24.991 "reset": true,
00:12:24.991 "nvme_admin": false,
00:12:24.991 "nvme_io": false,
00:12:24.991 "nvme_io_md": false,
00:12:24.991 "write_zeroes": true,
00:12:24.991 "zcopy": true,
00:12:24.991 "get_zone_info": false,
00:12:24.991 "zone_management": false,
00:12:24.991 "zone_append": false,
00:12:24.991 "compare": false,
00:12:24.991 "compare_and_write": false,
00:12:24.991 "abort": true,
00:12:24.991 "seek_hole": false,
00:12:24.991 "seek_data": false,
00:12:24.991 "copy": true,
00:12:24.991 "nvme_iov_md": false
00:12:24.991 },
00:12:24.991 "memory_domains": [
00:12:24.991 {
00:12:24.991 "dma_device_id": "system",
00:12:24.991 "dma_device_type": 1
00:12:24.991 },
00:12:24.991 {
00:12:24.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.991 "dma_device_type": 2
00:12:24.991 }
00:12:24.991 ],
00:12:24.991 "driver_specific": {}
00:12:24.991 }
00:12:24.991 ]
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.991 "name": "Existed_Raid",
00:12:24.991 "uuid": "0ab46fdf-2477-42b9-be6a-6d53a3e512c8",
00:12:24.991 "strip_size_kb": 0,
00:12:24.991 "state": "configuring",
00:12:24.991 "raid_level": "raid1",
00:12:24.991 "superblock": true,
00:12:24.991 "num_base_bdevs": 4,
00:12:24.991 "num_base_bdevs_discovered": 1,
00:12:24.991 "num_base_bdevs_operational": 4,
00:12:24.991 "base_bdevs_list": [
00:12:24.991 {
00:12:24.991 "name": "BaseBdev1",
00:12:24.991 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8",
00:12:24.991 "is_configured": true,
00:12:24.991 "data_offset": 2048,
00:12:24.991 "data_size": 63488
00:12:24.991 },
00:12:24.991 {
00:12:24.991 "name": "BaseBdev2",
00:12:24.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.991 "is_configured": false,
00:12:24.991 "data_offset": 0,
00:12:24.991 "data_size": 0
00:12:24.991 },
00:12:24.991 {
00:12:24.991 "name": "BaseBdev3",
00:12:24.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.991 "is_configured": false,
00:12:24.991 "data_offset": 0,
00:12:24.991 "data_size": 0
00:12:24.991 },
00:12:24.991 {
00:12:24.991 "name": "BaseBdev4",
00:12:24.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.991 "is_configured": false,
00:12:24.991 "data_offset": 0,
00:12:24.991 "data_size": 0
00:12:24.991 }
00:12:24.991 ]
00:12:24.991 }'
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.991 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x
00:12:25.561 [2024-12-13 08:23:37.646066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:25.561 [2024-12-13 08:23:37.646194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.561 [2024-12-13 08:23:37.658127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:25.561 [2024-12-13 08:23:37.660116] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:25.561 [2024-12-13 08:23:37.660174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:25.561 [2024-12-13 08:23:37.660185] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:25.561 [2024-12-13 08:23:37.660197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:25.561 [2024-12-13 08:23:37.660205] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:25.561 [2024-12-13 08:23:37.660215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.561 "name": "Existed_Raid",
00:12:25.561 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3",
00:12:25.561 "strip_size_kb": 0,
00:12:25.561 "state": "configuring",
00:12:25.561 "raid_level": "raid1",
00:12:25.561 "superblock": true,
00:12:25.561 "num_base_bdevs": 4,
00:12:25.561 "num_base_bdevs_discovered": 1,
00:12:25.561 "num_base_bdevs_operational": 4,
00:12:25.561 "base_bdevs_list": [
00:12:25.561 {
00:12:25.561 "name": "BaseBdev1",
00:12:25.561 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8",
00:12:25.561 "is_configured": true,
00:12:25.561 "data_offset": 2048,
00:12:25.561 "data_size": 63488
00:12:25.561 },
00:12:25.561 {
00:12:25.561 "name": "BaseBdev2",
00:12:25.561 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:25.561 "is_configured": false,
00:12:25.561 "data_offset": 0,
00:12:25.561 "data_size": 0
00:12:25.561 },
00:12:25.561 {
00:12:25.561 "name": "BaseBdev3",
00:12:25.561 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:25.561 "is_configured": false,
00:12:25.561 "data_offset": 0,
00:12:25.561 "data_size": 0
00:12:25.561 },
00:12:25.561 {
00:12:25.561 "name": "BaseBdev4",
00:12:25.561 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:25.561 "is_configured": false,
00:12:25.561 "data_offset": 0,
00:12:25.561 "data_size": 0
00:12:25.561 }
00:12:25.561 ]
00:12:25.561 }'
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.561 08:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.821 [2024-12-13 08:23:38.154063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.821 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.080 [
00:12:26.080 {
00:12:26.080 "name": "BaseBdev2",
00:12:26.080 "aliases": [
00:12:26.080 "cb853e5d-c750-4387-ab6f-b44f0bebb4b8"
00:12:26.080 ],
00:12:26.080 "product_name": "Malloc disk",
00:12:26.080 "block_size": 512,
00:12:26.080 "num_blocks": 65536,
00:12:26.080 "uuid": "cb853e5d-c750-4387-ab6f-b44f0bebb4b8",
00:12:26.080 "assigned_rate_limits": {
00:12:26.080 "rw_ios_per_sec": 0,
00:12:26.080 "rw_mbytes_per_sec": 0,
00:12:26.080 "r_mbytes_per_sec": 0,
00:12:26.080 "w_mbytes_per_sec": 0
00:12:26.080 },
00:12:26.080 "claimed": true,
00:12:26.080 "claim_type": "exclusive_write",
00:12:26.080 "zoned": false,
00:12:26.080 "supported_io_types": {
00:12:26.080 "read": true,
00:12:26.080 "write": true,
00:12:26.080 "unmap": true,
00:12:26.080 "flush": true,
00:12:26.080 "reset": true,
00:12:26.080 "nvme_admin": false,
00:12:26.080 "nvme_io": false,
00:12:26.080 "nvme_io_md": false,
00:12:26.080 "write_zeroes": true,
00:12:26.080 "zcopy": true,
00:12:26.080 "get_zone_info": false,
00:12:26.080 "zone_management": false,
00:12:26.080 "zone_append": false,
00:12:26.080 "compare": false,
00:12:26.080 "compare_and_write": false,
00:12:26.080 "abort": true,
00:12:26.080 "seek_hole": false,
00:12:26.080 "seek_data": false,
00:12:26.080 "copy": true,
00:12:26.080 "nvme_iov_md": false
00:12:26.080 },
00:12:26.080 "memory_domains": [
00:12:26.080 {
00:12:26.080 "dma_device_id": "system",
00:12:26.080 "dma_device_type": 1
00:12:26.080 },
00:12:26.080 {
00:12:26.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:26.081 "dma_device_type": 2
00:12:26.081 }
00:12:26.081 ],
00:12:26.081 "driver_specific": {}
00:12:26.081 }
00:12:26.081 ]
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:26.081 "name": "Existed_Raid",
00:12:26.081 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3",
00:12:26.081 "strip_size_kb": 0,
00:12:26.081 "state": "configuring",
00:12:26.081 "raid_level": "raid1",
00:12:26.081 "superblock": true,
00:12:26.081 "num_base_bdevs": 4,
00:12:26.081 "num_base_bdevs_discovered": 2,
00:12:26.081 "num_base_bdevs_operational": 4,
00:12:26.081 "base_bdevs_list": [
00:12:26.081 {
00:12:26.081 "name": "BaseBdev1",
00:12:26.081 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8",
00:12:26.081 "is_configured": true,
00:12:26.081 "data_offset": 2048,
00:12:26.081 "data_size": 63488
00:12:26.081 },
00:12:26.081 {
00:12:26.081 "name": "BaseBdev2",
00:12:26.081 "uuid": "cb853e5d-c750-4387-ab6f-b44f0bebb4b8",
00:12:26.081 "is_configured": true,
00:12:26.081 "data_offset": 2048,
00:12:26.081 "data_size": 63488
00:12:26.081 },
00:12:26.081 {
00:12:26.081 "name": "BaseBdev3",
00:12:26.081 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:26.081 "is_configured": false,
00:12:26.081 "data_offset": 0,
00:12:26.081 "data_size": 0
00:12:26.081 },
00:12:26.081 {
00:12:26.081 "name": "BaseBdev4",
00:12:26.081 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:26.081 "is_configured": false,
00:12:26.081 "data_offset": 0,
00:12:26.081 "data_size": 0
00:12:26.081 }
00:12:26.081 ]
00:12:26.081 }'
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:26.081 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.341 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:26.341 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.341 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.601 [2024-12-13 08:23:38.711934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:26.601 BaseBdev3
00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.601 [ 00:12:26.601 { 00:12:26.601 "name": "BaseBdev3", 00:12:26.601 "aliases": [ 00:12:26.601 "3573222e-66d6-49f7-9609-1d7092e27eba" 00:12:26.601 ], 00:12:26.601 "product_name": "Malloc disk", 00:12:26.601 "block_size": 512, 00:12:26.601 "num_blocks": 65536, 00:12:26.601 "uuid": "3573222e-66d6-49f7-9609-1d7092e27eba", 00:12:26.601 "assigned_rate_limits": { 00:12:26.601 "rw_ios_per_sec": 0, 00:12:26.601 "rw_mbytes_per_sec": 0, 00:12:26.601 "r_mbytes_per_sec": 0, 00:12:26.601 "w_mbytes_per_sec": 0 00:12:26.601 }, 00:12:26.601 "claimed": true, 00:12:26.601 "claim_type": "exclusive_write", 00:12:26.601 "zoned": false, 00:12:26.601 "supported_io_types": { 00:12:26.601 "read": true, 00:12:26.601 
"write": true, 00:12:26.601 "unmap": true, 00:12:26.601 "flush": true, 00:12:26.601 "reset": true, 00:12:26.601 "nvme_admin": false, 00:12:26.601 "nvme_io": false, 00:12:26.601 "nvme_io_md": false, 00:12:26.601 "write_zeroes": true, 00:12:26.601 "zcopy": true, 00:12:26.601 "get_zone_info": false, 00:12:26.601 "zone_management": false, 00:12:26.601 "zone_append": false, 00:12:26.601 "compare": false, 00:12:26.601 "compare_and_write": false, 00:12:26.601 "abort": true, 00:12:26.601 "seek_hole": false, 00:12:26.601 "seek_data": false, 00:12:26.601 "copy": true, 00:12:26.601 "nvme_iov_md": false 00:12:26.601 }, 00:12:26.601 "memory_domains": [ 00:12:26.601 { 00:12:26.601 "dma_device_id": "system", 00:12:26.601 "dma_device_type": 1 00:12:26.601 }, 00:12:26.601 { 00:12:26.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.601 "dma_device_type": 2 00:12:26.601 } 00:12:26.601 ], 00:12:26.601 "driver_specific": {} 00:12:26.601 } 00:12:26.601 ] 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.601 "name": "Existed_Raid", 00:12:26.601 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3", 00:12:26.601 "strip_size_kb": 0, 00:12:26.601 "state": "configuring", 00:12:26.601 "raid_level": "raid1", 00:12:26.601 "superblock": true, 00:12:26.601 "num_base_bdevs": 4, 00:12:26.601 "num_base_bdevs_discovered": 3, 00:12:26.601 "num_base_bdevs_operational": 4, 00:12:26.601 "base_bdevs_list": [ 00:12:26.601 { 00:12:26.601 "name": "BaseBdev1", 00:12:26.601 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8", 00:12:26.601 "is_configured": true, 00:12:26.601 "data_offset": 2048, 00:12:26.601 "data_size": 63488 00:12:26.601 }, 00:12:26.601 { 00:12:26.601 "name": "BaseBdev2", 00:12:26.601 "uuid": 
"cb853e5d-c750-4387-ab6f-b44f0bebb4b8", 00:12:26.601 "is_configured": true, 00:12:26.601 "data_offset": 2048, 00:12:26.601 "data_size": 63488 00:12:26.601 }, 00:12:26.601 { 00:12:26.601 "name": "BaseBdev3", 00:12:26.601 "uuid": "3573222e-66d6-49f7-9609-1d7092e27eba", 00:12:26.601 "is_configured": true, 00:12:26.601 "data_offset": 2048, 00:12:26.601 "data_size": 63488 00:12:26.601 }, 00:12:26.601 { 00:12:26.601 "name": "BaseBdev4", 00:12:26.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.601 "is_configured": false, 00:12:26.601 "data_offset": 0, 00:12:26.601 "data_size": 0 00:12:26.601 } 00:12:26.601 ] 00:12:26.601 }' 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.601 08:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.861 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:26.861 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.861 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.121 [2024-12-13 08:23:39.249449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.121 [2024-12-13 08:23:39.249817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.121 [2024-12-13 08:23:39.249875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.121 [2024-12-13 08:23:39.250173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:27.121 [2024-12-13 08:23:39.250387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.121 [2024-12-13 08:23:39.250437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:27.121 BaseBdev4 00:12:27.121 [2024-12-13 08:23:39.250614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.121 [ 00:12:27.121 { 00:12:27.121 "name": "BaseBdev4", 00:12:27.121 "aliases": [ 00:12:27.121 "220b7ba6-4def-405b-8ad9-4e78a2660d5c" 00:12:27.121 ], 00:12:27.121 "product_name": "Malloc disk", 00:12:27.121 "block_size": 512, 00:12:27.121 
"num_blocks": 65536, 00:12:27.121 "uuid": "220b7ba6-4def-405b-8ad9-4e78a2660d5c", 00:12:27.121 "assigned_rate_limits": { 00:12:27.121 "rw_ios_per_sec": 0, 00:12:27.121 "rw_mbytes_per_sec": 0, 00:12:27.121 "r_mbytes_per_sec": 0, 00:12:27.121 "w_mbytes_per_sec": 0 00:12:27.121 }, 00:12:27.121 "claimed": true, 00:12:27.121 "claim_type": "exclusive_write", 00:12:27.121 "zoned": false, 00:12:27.121 "supported_io_types": { 00:12:27.121 "read": true, 00:12:27.121 "write": true, 00:12:27.121 "unmap": true, 00:12:27.121 "flush": true, 00:12:27.121 "reset": true, 00:12:27.121 "nvme_admin": false, 00:12:27.121 "nvme_io": false, 00:12:27.121 "nvme_io_md": false, 00:12:27.121 "write_zeroes": true, 00:12:27.121 "zcopy": true, 00:12:27.121 "get_zone_info": false, 00:12:27.121 "zone_management": false, 00:12:27.121 "zone_append": false, 00:12:27.121 "compare": false, 00:12:27.121 "compare_and_write": false, 00:12:27.121 "abort": true, 00:12:27.121 "seek_hole": false, 00:12:27.121 "seek_data": false, 00:12:27.121 "copy": true, 00:12:27.121 "nvme_iov_md": false 00:12:27.121 }, 00:12:27.121 "memory_domains": [ 00:12:27.121 { 00:12:27.121 "dma_device_id": "system", 00:12:27.121 "dma_device_type": 1 00:12:27.121 }, 00:12:27.121 { 00:12:27.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.121 "dma_device_type": 2 00:12:27.121 } 00:12:27.121 ], 00:12:27.121 "driver_specific": {} 00:12:27.121 } 00:12:27.121 ] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.121 "name": "Existed_Raid", 00:12:27.121 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3", 00:12:27.121 "strip_size_kb": 0, 00:12:27.121 "state": "online", 00:12:27.121 "raid_level": "raid1", 00:12:27.121 "superblock": true, 00:12:27.121 "num_base_bdevs": 4, 
00:12:27.121 "num_base_bdevs_discovered": 4, 00:12:27.121 "num_base_bdevs_operational": 4, 00:12:27.121 "base_bdevs_list": [ 00:12:27.121 { 00:12:27.121 "name": "BaseBdev1", 00:12:27.121 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8", 00:12:27.121 "is_configured": true, 00:12:27.121 "data_offset": 2048, 00:12:27.121 "data_size": 63488 00:12:27.121 }, 00:12:27.121 { 00:12:27.121 "name": "BaseBdev2", 00:12:27.121 "uuid": "cb853e5d-c750-4387-ab6f-b44f0bebb4b8", 00:12:27.121 "is_configured": true, 00:12:27.121 "data_offset": 2048, 00:12:27.121 "data_size": 63488 00:12:27.121 }, 00:12:27.121 { 00:12:27.121 "name": "BaseBdev3", 00:12:27.121 "uuid": "3573222e-66d6-49f7-9609-1d7092e27eba", 00:12:27.121 "is_configured": true, 00:12:27.121 "data_offset": 2048, 00:12:27.121 "data_size": 63488 00:12:27.121 }, 00:12:27.121 { 00:12:27.121 "name": "BaseBdev4", 00:12:27.121 "uuid": "220b7ba6-4def-405b-8ad9-4e78a2660d5c", 00:12:27.121 "is_configured": true, 00:12:27.121 "data_offset": 2048, 00:12:27.121 "data_size": 63488 00:12:27.121 } 00:12:27.121 ] 00:12:27.121 }' 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.121 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.691 
08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.691 [2024-12-13 08:23:39.776941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.691 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.691 "name": "Existed_Raid", 00:12:27.691 "aliases": [ 00:12:27.691 "3c479183-8c50-4f7c-b94f-7edea0e534f3" 00:12:27.691 ], 00:12:27.691 "product_name": "Raid Volume", 00:12:27.691 "block_size": 512, 00:12:27.691 "num_blocks": 63488, 00:12:27.691 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3", 00:12:27.691 "assigned_rate_limits": { 00:12:27.691 "rw_ios_per_sec": 0, 00:12:27.691 "rw_mbytes_per_sec": 0, 00:12:27.691 "r_mbytes_per_sec": 0, 00:12:27.691 "w_mbytes_per_sec": 0 00:12:27.691 }, 00:12:27.691 "claimed": false, 00:12:27.691 "zoned": false, 00:12:27.691 "supported_io_types": { 00:12:27.691 "read": true, 00:12:27.691 "write": true, 00:12:27.691 "unmap": false, 00:12:27.691 "flush": false, 00:12:27.691 "reset": true, 00:12:27.691 "nvme_admin": false, 00:12:27.691 "nvme_io": false, 00:12:27.691 "nvme_io_md": false, 00:12:27.691 "write_zeroes": true, 00:12:27.691 "zcopy": false, 00:12:27.691 "get_zone_info": false, 00:12:27.691 "zone_management": false, 00:12:27.691 "zone_append": false, 00:12:27.691 "compare": false, 00:12:27.691 "compare_and_write": false, 00:12:27.691 "abort": false, 00:12:27.691 "seek_hole": false, 00:12:27.691 "seek_data": false, 00:12:27.691 "copy": false, 00:12:27.691 
"nvme_iov_md": false 00:12:27.691 }, 00:12:27.691 "memory_domains": [ 00:12:27.691 { 00:12:27.691 "dma_device_id": "system", 00:12:27.691 "dma_device_type": 1 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.691 "dma_device_type": 2 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "system", 00:12:27.691 "dma_device_type": 1 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.691 "dma_device_type": 2 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "system", 00:12:27.691 "dma_device_type": 1 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.691 "dma_device_type": 2 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "system", 00:12:27.691 "dma_device_type": 1 00:12:27.691 }, 00:12:27.691 { 00:12:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.691 "dma_device_type": 2 00:12:27.691 } 00:12:27.691 ], 00:12:27.691 "driver_specific": { 00:12:27.691 "raid": { 00:12:27.691 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3", 00:12:27.691 "strip_size_kb": 0, 00:12:27.691 "state": "online", 00:12:27.691 "raid_level": "raid1", 00:12:27.692 "superblock": true, 00:12:27.692 "num_base_bdevs": 4, 00:12:27.692 "num_base_bdevs_discovered": 4, 00:12:27.692 "num_base_bdevs_operational": 4, 00:12:27.692 "base_bdevs_list": [ 00:12:27.692 { 00:12:27.692 "name": "BaseBdev1", 00:12:27.692 "uuid": "3c83e4e0-bad6-4abd-ba2b-58d39a2db1c8", 00:12:27.692 "is_configured": true, 00:12:27.692 "data_offset": 2048, 00:12:27.692 "data_size": 63488 00:12:27.692 }, 00:12:27.692 { 00:12:27.692 "name": "BaseBdev2", 00:12:27.692 "uuid": "cb853e5d-c750-4387-ab6f-b44f0bebb4b8", 00:12:27.692 "is_configured": true, 00:12:27.692 "data_offset": 2048, 00:12:27.692 "data_size": 63488 00:12:27.692 }, 00:12:27.692 { 00:12:27.692 "name": "BaseBdev3", 00:12:27.692 "uuid": "3573222e-66d6-49f7-9609-1d7092e27eba", 00:12:27.692 "is_configured": true, 
00:12:27.692 "data_offset": 2048, 00:12:27.692 "data_size": 63488 00:12:27.692 }, 00:12:27.692 { 00:12:27.692 "name": "BaseBdev4", 00:12:27.692 "uuid": "220b7ba6-4def-405b-8ad9-4e78a2660d5c", 00:12:27.692 "is_configured": true, 00:12:27.692 "data_offset": 2048, 00:12:27.692 "data_size": 63488 00:12:27.692 } 00:12:27.692 ] 00:12:27.692 } 00:12:27.692 } 00:12:27.692 }' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:27.692 BaseBdev2 00:12:27.692 BaseBdev3 00:12:27.692 BaseBdev4' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.692 08:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.692 08:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.692 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 [2024-12-13 08:23:40.084187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:27.952 08:23:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.952 "name": "Existed_Raid", 00:12:27.952 "uuid": "3c479183-8c50-4f7c-b94f-7edea0e534f3", 00:12:27.952 "strip_size_kb": 0, 00:12:27.952 
"state": "online", 00:12:27.952 "raid_level": "raid1", 00:12:27.952 "superblock": true, 00:12:27.952 "num_base_bdevs": 4, 00:12:27.952 "num_base_bdevs_discovered": 3, 00:12:27.952 "num_base_bdevs_operational": 3, 00:12:27.952 "base_bdevs_list": [ 00:12:27.952 { 00:12:27.952 "name": null, 00:12:27.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.952 "is_configured": false, 00:12:27.952 "data_offset": 0, 00:12:27.952 "data_size": 63488 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev2", 00:12:27.952 "uuid": "cb853e5d-c750-4387-ab6f-b44f0bebb4b8", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 2048, 00:12:27.952 "data_size": 63488 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev3", 00:12:27.952 "uuid": "3573222e-66d6-49f7-9609-1d7092e27eba", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 2048, 00:12:27.952 "data_size": 63488 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev4", 00:12:27.952 "uuid": "220b7ba6-4def-405b-8ad9-4e78a2660d5c", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 2048, 00:12:27.952 "data_size": 63488 00:12:27.952 } 00:12:27.952 ] 00:12:27.952 }' 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.952 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.521 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:28.521 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:28.521 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.521 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.521 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:28.522 08:23:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.522 [2024-12-13 08:23:40.672249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.522 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.522 [2024-12-13 08:23:40.832294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.782 08:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.782 [2024-12-13 08:23:40.992611] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:28.782 [2024-12-13 08:23:40.992774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.782 [2024-12-13 08:23:41.098937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.782 [2024-12-13 08:23:41.099107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.782 [2024-12-13 08:23:41.099184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.782 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.042 BaseBdev2 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.042 08:23:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:29.042 [ 00:12:29.042 { 00:12:29.042 "name": "BaseBdev2", 00:12:29.042 "aliases": [ 00:12:29.042 "4415522b-76d4-438f-88b0-8d7d16f8c606" 00:12:29.042 ], 00:12:29.042 "product_name": "Malloc disk", 00:12:29.042 "block_size": 512, 00:12:29.042 "num_blocks": 65536, 00:12:29.042 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:29.042 "assigned_rate_limits": { 00:12:29.042 "rw_ios_per_sec": 0, 00:12:29.042 "rw_mbytes_per_sec": 0, 00:12:29.042 "r_mbytes_per_sec": 0, 00:12:29.042 "w_mbytes_per_sec": 0 00:12:29.042 }, 00:12:29.042 "claimed": false, 00:12:29.042 "zoned": false, 00:12:29.042 "supported_io_types": { 00:12:29.042 "read": true, 00:12:29.042 "write": true, 00:12:29.042 "unmap": true, 00:12:29.042 "flush": true, 00:12:29.042 "reset": true, 00:12:29.042 "nvme_admin": false, 00:12:29.042 "nvme_io": false, 00:12:29.042 "nvme_io_md": false, 00:12:29.042 "write_zeroes": true, 00:12:29.042 "zcopy": true, 00:12:29.042 "get_zone_info": false, 00:12:29.042 "zone_management": false, 00:12:29.042 "zone_append": false, 00:12:29.042 "compare": false, 00:12:29.043 "compare_and_write": false, 00:12:29.043 "abort": true, 00:12:29.043 "seek_hole": false, 00:12:29.043 "seek_data": false, 00:12:29.043 "copy": true, 00:12:29.043 "nvme_iov_md": false 00:12:29.043 }, 00:12:29.043 "memory_domains": [ 00:12:29.043 { 00:12:29.043 "dma_device_id": "system", 00:12:29.043 "dma_device_type": 1 00:12:29.043 }, 00:12:29.043 { 00:12:29.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.043 "dma_device_type": 2 00:12:29.043 } 00:12:29.043 ], 00:12:29.043 "driver_specific": {} 00:12:29.043 } 00:12:29.043 ] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:29.043 08:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 BaseBdev3 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 [ 00:12:29.043 { 00:12:29.043 "name": "BaseBdev3", 00:12:29.043 "aliases": [ 00:12:29.043 "1f65765e-b100-46d0-8fe1-1677975d6c98" 00:12:29.043 ], 00:12:29.043 "product_name": "Malloc disk", 00:12:29.043 "block_size": 512, 00:12:29.043 "num_blocks": 65536, 00:12:29.043 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:29.043 "assigned_rate_limits": { 00:12:29.043 "rw_ios_per_sec": 0, 00:12:29.043 "rw_mbytes_per_sec": 0, 00:12:29.043 "r_mbytes_per_sec": 0, 00:12:29.043 "w_mbytes_per_sec": 0 00:12:29.043 }, 00:12:29.043 "claimed": false, 00:12:29.043 "zoned": false, 00:12:29.043 "supported_io_types": { 00:12:29.043 "read": true, 00:12:29.043 "write": true, 00:12:29.043 "unmap": true, 00:12:29.043 "flush": true, 00:12:29.043 "reset": true, 00:12:29.043 "nvme_admin": false, 00:12:29.043 "nvme_io": false, 00:12:29.043 "nvme_io_md": false, 00:12:29.043 "write_zeroes": true, 00:12:29.043 "zcopy": true, 00:12:29.043 "get_zone_info": false, 00:12:29.043 "zone_management": false, 00:12:29.043 "zone_append": false, 00:12:29.043 "compare": false, 00:12:29.043 "compare_and_write": false, 00:12:29.043 "abort": true, 00:12:29.043 "seek_hole": false, 00:12:29.043 "seek_data": false, 00:12:29.043 "copy": true, 00:12:29.043 "nvme_iov_md": false 00:12:29.043 }, 00:12:29.043 "memory_domains": [ 00:12:29.043 { 00:12:29.043 "dma_device_id": "system", 00:12:29.043 "dma_device_type": 1 00:12:29.043 }, 00:12:29.043 { 00:12:29.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.043 "dma_device_type": 2 00:12:29.043 } 00:12:29.043 ], 00:12:29.043 "driver_specific": {} 00:12:29.043 } 00:12:29.043 ] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 BaseBdev4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 [ 00:12:29.043 { 00:12:29.043 "name": "BaseBdev4", 00:12:29.043 "aliases": [ 00:12:29.043 "195ee575-30f9-4e38-9706-c3dc4ecb42a2" 00:12:29.043 ], 00:12:29.043 "product_name": "Malloc disk", 00:12:29.043 "block_size": 512, 00:12:29.043 "num_blocks": 65536, 00:12:29.043 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:29.043 "assigned_rate_limits": { 00:12:29.043 "rw_ios_per_sec": 0, 00:12:29.043 "rw_mbytes_per_sec": 0, 00:12:29.043 "r_mbytes_per_sec": 0, 00:12:29.043 "w_mbytes_per_sec": 0 00:12:29.043 }, 00:12:29.043 "claimed": false, 00:12:29.043 "zoned": false, 00:12:29.043 "supported_io_types": { 00:12:29.043 "read": true, 00:12:29.043 "write": true, 00:12:29.043 "unmap": true, 00:12:29.043 "flush": true, 00:12:29.043 "reset": true, 00:12:29.043 "nvme_admin": false, 00:12:29.043 "nvme_io": false, 00:12:29.043 "nvme_io_md": false, 00:12:29.043 "write_zeroes": true, 00:12:29.043 "zcopy": true, 00:12:29.043 "get_zone_info": false, 00:12:29.043 "zone_management": false, 00:12:29.043 "zone_append": false, 00:12:29.043 "compare": false, 00:12:29.043 "compare_and_write": false, 00:12:29.043 "abort": true, 00:12:29.043 "seek_hole": false, 00:12:29.043 "seek_data": false, 00:12:29.043 "copy": true, 00:12:29.043 "nvme_iov_md": false 00:12:29.043 }, 00:12:29.043 "memory_domains": [ 00:12:29.043 { 00:12:29.043 "dma_device_id": "system", 00:12:29.043 "dma_device_type": 1 00:12:29.043 }, 00:12:29.043 { 00:12:29.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.043 "dma_device_type": 2 00:12:29.043 } 00:12:29.043 ], 00:12:29.043 "driver_specific": {} 00:12:29.043 } 00:12:29.043 ] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
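Each `bdev_malloc_create ... -b BaseBdevN` above is followed by `waitforbdev` (autotest_common.sh@903-911), which defaults `bdev_timeout` to 2000 ms and then issues `bdev_wait_for_examine` plus `bdev_get_bdevs -b <name> -t 2000`, letting the RPC itself wait for the bdev to register. A hedged Python sketch of the same wait, modeled as an explicit poll loop; `rpc_get_bdevs` here is a hypothetical stub standing in for the real `rpc.py bdev_get_bdevs` call, not the SPDK client:

```python
import time

def rpc_get_bdevs(name):
    """Stub for `rpc.py bdev_get_bdevs -b <name>`: returns the bdev list,
    empty until the bdev is registered. This stub reports BaseBdev4 as
    already present."""
    return [{"name": name}] if name == "BaseBdev4" else []

def waitforbdev(name, timeout_ms=2000, poll_ms=10):
    """Rough equivalent of autotest_common.sh's waitforbdev: poll until the
    named bdev exists or the timeout elapses. The real script delegates the
    wait to the RPC's -t flag instead of looping in shell."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if rpc_get_bdevs(name):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

print(waitforbdev("BaseBdev4"))  # True: the stub reports it immediately
```

The test script relies on this wait so that the subsequent `bdev_raid_create` only runs once every malloc base bdev is visible to the bdev layer.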
00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 [2024-12-13 08:23:41.394430] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.043 [2024-12-13 08:23:41.394529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.043 [2024-12-13 08:23:41.394600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.043 [2024-12-13 08:23:41.396702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.043 [2024-12-13 08:23:41.396795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.043 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.044 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.044 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.044 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.304 "name": "Existed_Raid", 00:12:29.304 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:29.304 "strip_size_kb": 0, 00:12:29.304 "state": "configuring", 00:12:29.304 "raid_level": "raid1", 00:12:29.304 "superblock": true, 00:12:29.304 "num_base_bdevs": 4, 00:12:29.304 "num_base_bdevs_discovered": 3, 00:12:29.304 "num_base_bdevs_operational": 4, 00:12:29.304 "base_bdevs_list": [ 00:12:29.304 { 00:12:29.304 "name": "BaseBdev1", 00:12:29.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.304 "is_configured": false, 00:12:29.304 "data_offset": 0, 00:12:29.304 "data_size": 0 00:12:29.304 }, 00:12:29.304 { 00:12:29.304 "name": "BaseBdev2", 00:12:29.304 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 
00:12:29.304 "is_configured": true, 00:12:29.304 "data_offset": 2048, 00:12:29.304 "data_size": 63488 00:12:29.304 }, 00:12:29.304 { 00:12:29.304 "name": "BaseBdev3", 00:12:29.304 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:29.304 "is_configured": true, 00:12:29.304 "data_offset": 2048, 00:12:29.304 "data_size": 63488 00:12:29.304 }, 00:12:29.304 { 00:12:29.304 "name": "BaseBdev4", 00:12:29.304 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:29.304 "is_configured": true, 00:12:29.304 "data_offset": 2048, 00:12:29.304 "data_size": 63488 00:12:29.304 } 00:12:29.304 ] 00:12:29.304 }' 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.304 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.563 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:29.563 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.563 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.563 [2024-12-13 08:23:41.833719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:29.563 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.563 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.564 "name": "Existed_Raid", 00:12:29.564 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:29.564 "strip_size_kb": 0, 00:12:29.564 "state": "configuring", 00:12:29.564 "raid_level": "raid1", 00:12:29.564 "superblock": true, 00:12:29.564 "num_base_bdevs": 4, 00:12:29.564 "num_base_bdevs_discovered": 2, 00:12:29.564 "num_base_bdevs_operational": 4, 00:12:29.564 "base_bdevs_list": [ 00:12:29.564 { 00:12:29.564 "name": "BaseBdev1", 00:12:29.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.564 "is_configured": false, 00:12:29.564 "data_offset": 0, 00:12:29.564 "data_size": 0 00:12:29.564 }, 00:12:29.564 { 00:12:29.564 "name": null, 00:12:29.564 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:29.564 
"is_configured": false, 00:12:29.564 "data_offset": 0, 00:12:29.564 "data_size": 63488 00:12:29.564 }, 00:12:29.564 { 00:12:29.564 "name": "BaseBdev3", 00:12:29.564 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:29.564 "is_configured": true, 00:12:29.564 "data_offset": 2048, 00:12:29.564 "data_size": 63488 00:12:29.564 }, 00:12:29.564 { 00:12:29.564 "name": "BaseBdev4", 00:12:29.564 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:29.564 "is_configured": true, 00:12:29.564 "data_offset": 2048, 00:12:29.564 "data_size": 63488 00:12:29.564 } 00:12:29.564 ] 00:12:29.564 }' 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.564 08:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.133 [2024-12-13 08:23:42.310721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.133 BaseBdev1 
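After `bdev_raid_remove_base_bdev BaseBdev2`, bdev_raid.sh@295 checks `jq '.[0].base_bdevs_list[1].is_configured'` against `false`: the removed slot keeps its uuid but is deconfigured (`"name": null`, `data_offset` back to 0). The same check in Python, using the `base_bdevs_list` copied from the "configuring" `Existed_Raid` dump above rather than a live query:

```python
import json

# Slot list copied from the log's "configuring" dump: BaseBdev2 was just
# removed, so slot 1 keeps its uuid but is no longer configured.
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 0, "data_size": 0},
  {"name": null, "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606",
   "is_configured": false, "data_offset": 0, "data_size": 63488},
  {"name": "BaseBdev3", "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98",
   "is_configured": true, "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev4", "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2",
   "is_configured": true, "data_offset": 2048, "data_size": 63488}
]
""")

# jq '.[0].base_bdevs_list[1].is_configured' over bdev_raid_get_bdevs output:
slot1_configured = base_bdevs_list[1]["is_configured"]
print(slot1_configured)  # False

# The array still reports all 4 slots; only 2 are configured, which matches
# num_base_bdevs_discovered: 2 in the dump above.
configured = sum(1 for b in base_bdevs_list if b["is_configured"])
```

Slot positions are stable across removal, which is why the test can index `base_bdevs_list[1]` by position rather than searching by name.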
00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.133 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.134 [ 00:12:30.134 { 00:12:30.134 "name": "BaseBdev1", 00:12:30.134 "aliases": [ 00:12:30.134 "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5" 00:12:30.134 ], 00:12:30.134 "product_name": "Malloc disk", 00:12:30.134 "block_size": 512, 00:12:30.134 "num_blocks": 65536, 00:12:30.134 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:30.134 "assigned_rate_limits": { 00:12:30.134 
"rw_ios_per_sec": 0, 00:12:30.134 "rw_mbytes_per_sec": 0, 00:12:30.134 "r_mbytes_per_sec": 0, 00:12:30.134 "w_mbytes_per_sec": 0 00:12:30.134 }, 00:12:30.134 "claimed": true, 00:12:30.134 "claim_type": "exclusive_write", 00:12:30.134 "zoned": false, 00:12:30.134 "supported_io_types": { 00:12:30.134 "read": true, 00:12:30.134 "write": true, 00:12:30.134 "unmap": true, 00:12:30.134 "flush": true, 00:12:30.134 "reset": true, 00:12:30.134 "nvme_admin": false, 00:12:30.134 "nvme_io": false, 00:12:30.134 "nvme_io_md": false, 00:12:30.134 "write_zeroes": true, 00:12:30.134 "zcopy": true, 00:12:30.134 "get_zone_info": false, 00:12:30.134 "zone_management": false, 00:12:30.134 "zone_append": false, 00:12:30.134 "compare": false, 00:12:30.134 "compare_and_write": false, 00:12:30.134 "abort": true, 00:12:30.134 "seek_hole": false, 00:12:30.134 "seek_data": false, 00:12:30.134 "copy": true, 00:12:30.134 "nvme_iov_md": false 00:12:30.134 }, 00:12:30.134 "memory_domains": [ 00:12:30.134 { 00:12:30.134 "dma_device_id": "system", 00:12:30.134 "dma_device_type": 1 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.134 "dma_device_type": 2 00:12:30.134 } 00:12:30.134 ], 00:12:30.134 "driver_specific": {} 00:12:30.134 } 00:12:30.134 ] 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.134 "name": "Existed_Raid", 00:12:30.134 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:30.134 "strip_size_kb": 0, 00:12:30.134 "state": "configuring", 00:12:30.134 "raid_level": "raid1", 00:12:30.134 "superblock": true, 00:12:30.134 "num_base_bdevs": 4, 00:12:30.134 "num_base_bdevs_discovered": 3, 00:12:30.134 "num_base_bdevs_operational": 4, 00:12:30.134 "base_bdevs_list": [ 00:12:30.134 { 00:12:30.134 "name": "BaseBdev1", 00:12:30.134 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:30.134 "is_configured": true, 00:12:30.134 "data_offset": 2048, 00:12:30.134 "data_size": 63488 
00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "name": null, 00:12:30.134 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:30.134 "is_configured": false, 00:12:30.134 "data_offset": 0, 00:12:30.134 "data_size": 63488 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "name": "BaseBdev3", 00:12:30.134 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:30.134 "is_configured": true, 00:12:30.134 "data_offset": 2048, 00:12:30.134 "data_size": 63488 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "name": "BaseBdev4", 00:12:30.134 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:30.134 "is_configured": true, 00:12:30.134 "data_offset": 2048, 00:12:30.134 "data_size": 63488 00:12:30.134 } 00:12:30.134 ] 00:12:30.134 }' 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.134 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.703 
[2024-12-13 08:23:42.825938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.703 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.704 08:23:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.704 "name": "Existed_Raid", 00:12:30.704 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:30.704 "strip_size_kb": 0, 00:12:30.704 "state": "configuring", 00:12:30.704 "raid_level": "raid1", 00:12:30.704 "superblock": true, 00:12:30.704 "num_base_bdevs": 4, 00:12:30.704 "num_base_bdevs_discovered": 2, 00:12:30.704 "num_base_bdevs_operational": 4, 00:12:30.704 "base_bdevs_list": [ 00:12:30.704 { 00:12:30.704 "name": "BaseBdev1", 00:12:30.704 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:30.704 "is_configured": true, 00:12:30.704 "data_offset": 2048, 00:12:30.704 "data_size": 63488 00:12:30.704 }, 00:12:30.704 { 00:12:30.704 "name": null, 00:12:30.704 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:30.704 "is_configured": false, 00:12:30.704 "data_offset": 0, 00:12:30.704 "data_size": 63488 00:12:30.704 }, 00:12:30.704 { 00:12:30.704 "name": null, 00:12:30.704 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:30.704 "is_configured": false, 00:12:30.704 "data_offset": 0, 00:12:30.704 "data_size": 63488 00:12:30.704 }, 00:12:30.704 { 00:12:30.704 "name": "BaseBdev4", 00:12:30.704 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:30.704 "is_configured": true, 00:12:30.704 "data_offset": 2048, 00:12:30.704 "data_size": 63488 00:12:30.704 } 00:12:30.704 ] 00:12:30.704 }' 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.704 08:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.964 
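The trace above removes BaseBdev3 and re-reads Existed_Raid: the raid stays in the "configuring" state, `num_base_bdevs_discovered` drops from 3 to 2, and BaseBdev3's slot loses its name and `is_configured` flag. The consistency check that `verify_raid_bdev_state` performs on this JSON can be sketched in Python (a sketch only — the real test drives SPDK over its JSON-RPC socket via `rpc_cmd`, and the helper name below mirrors the shell function, not a real API):

```python
import json

# Existed_Raid info as dumped by `rpc_cmd bdev_raid_get_bdevs all` right
# after BaseBdev3 was removed (values taken from the trace above,
# trimmed to the fields the check actually inspects).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Pure-Python mirror of bdev_raid.sh's verify_raid_bdev_state checks."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # the discovered count must equal the number of configured base bdevs
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert info["num_base_bdevs_discovered"] == configured

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4)
print("state check passed")
```

The same check is re-run after each add/remove step in the trace; only the expected discovered count changes.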
08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 [2024-12-13 08:23:43.281234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.964 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.964 "name": "Existed_Raid", 00:12:30.964 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:30.964 "strip_size_kb": 0, 00:12:30.964 "state": "configuring", 00:12:30.964 "raid_level": "raid1", 00:12:30.964 "superblock": true, 00:12:30.964 "num_base_bdevs": 4, 00:12:30.964 "num_base_bdevs_discovered": 3, 00:12:30.964 "num_base_bdevs_operational": 4, 00:12:30.964 "base_bdevs_list": [ 00:12:30.964 { 00:12:30.964 "name": "BaseBdev1", 00:12:30.964 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:30.964 "is_configured": true, 00:12:30.964 "data_offset": 2048, 00:12:30.964 "data_size": 63488 00:12:30.964 }, 00:12:30.964 { 00:12:30.964 "name": null, 00:12:30.964 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:30.964 "is_configured": false, 00:12:30.964 "data_offset": 0, 00:12:30.964 "data_size": 63488 00:12:30.964 }, 00:12:30.964 { 00:12:30.964 "name": "BaseBdev3", 00:12:30.964 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:30.964 "is_configured": true, 00:12:30.964 "data_offset": 2048, 00:12:30.964 "data_size": 63488 00:12:30.964 }, 00:12:30.964 { 00:12:30.964 "name": "BaseBdev4", 00:12:30.964 "uuid": 
"195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:30.964 "is_configured": true, 00:12:30.964 "data_offset": 2048, 00:12:30.964 "data_size": 63488 00:12:30.964 } 00:12:30.964 ] 00:12:30.965 }' 00:12:30.965 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.965 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.533 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.533 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:31.533 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.533 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.534 [2024-12-13 08:23:43.744406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.534 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.793 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.793 "name": "Existed_Raid", 00:12:31.793 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:31.793 "strip_size_kb": 0, 00:12:31.793 "state": "configuring", 00:12:31.793 "raid_level": "raid1", 00:12:31.793 "superblock": true, 00:12:31.793 "num_base_bdevs": 4, 00:12:31.793 "num_base_bdevs_discovered": 2, 00:12:31.793 "num_base_bdevs_operational": 4, 00:12:31.793 "base_bdevs_list": [ 00:12:31.793 { 00:12:31.793 "name": null, 00:12:31.793 
"uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:31.793 "is_configured": false, 00:12:31.793 "data_offset": 0, 00:12:31.793 "data_size": 63488 00:12:31.793 }, 00:12:31.793 { 00:12:31.793 "name": null, 00:12:31.793 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:31.793 "is_configured": false, 00:12:31.793 "data_offset": 0, 00:12:31.793 "data_size": 63488 00:12:31.793 }, 00:12:31.793 { 00:12:31.793 "name": "BaseBdev3", 00:12:31.793 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:31.793 "is_configured": true, 00:12:31.793 "data_offset": 2048, 00:12:31.793 "data_size": 63488 00:12:31.793 }, 00:12:31.793 { 00:12:31.793 "name": "BaseBdev4", 00:12:31.793 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:31.793 "is_configured": true, 00:12:31.793 "data_offset": 2048, 00:12:31.793 "data_size": 63488 00:12:31.793 } 00:12:31.793 ] 00:12:31.793 }' 00:12:31.793 08:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.793 08:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.053 [2024-12-13 08:23:44.354540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.053 "name": "Existed_Raid", 00:12:32.053 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:32.053 "strip_size_kb": 0, 00:12:32.053 "state": "configuring", 00:12:32.053 "raid_level": "raid1", 00:12:32.053 "superblock": true, 00:12:32.053 "num_base_bdevs": 4, 00:12:32.053 "num_base_bdevs_discovered": 3, 00:12:32.053 "num_base_bdevs_operational": 4, 00:12:32.053 "base_bdevs_list": [ 00:12:32.053 { 00:12:32.053 "name": null, 00:12:32.053 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:32.053 "is_configured": false, 00:12:32.053 "data_offset": 0, 00:12:32.053 "data_size": 63488 00:12:32.053 }, 00:12:32.053 { 00:12:32.053 "name": "BaseBdev2", 00:12:32.053 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:32.053 "is_configured": true, 00:12:32.053 "data_offset": 2048, 00:12:32.053 "data_size": 63488 00:12:32.053 }, 00:12:32.053 { 00:12:32.053 "name": "BaseBdev3", 00:12:32.053 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:32.053 "is_configured": true, 00:12:32.053 "data_offset": 2048, 00:12:32.053 "data_size": 63488 00:12:32.053 }, 00:12:32.053 { 00:12:32.053 "name": "BaseBdev4", 00:12:32.053 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:32.053 "is_configured": true, 00:12:32.053 "data_offset": 2048, 00:12:32.053 "data_size": 63488 00:12:32.053 } 00:12:32.053 ] 00:12:32.053 }' 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.053 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.621 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.622 [2024-12-13 08:23:44.871637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:32.622 [2024-12-13 08:23:44.871972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:32.622 [2024-12-13 08:23:44.872035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.622 [2024-12-13 08:23:44.872352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:32.622 
[2024-12-13 08:23:44.872567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:32.622 [2024-12-13 08:23:44.872615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:32.622 NewBaseBdev 00:12:32.622 [2024-12-13 08:23:44.872798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.622 [ 00:12:32.622 { 00:12:32.622 "name": "NewBaseBdev", 00:12:32.622 "aliases": [ 00:12:32.622 "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5" 00:12:32.622 ], 00:12:32.622 "product_name": "Malloc disk", 00:12:32.622 "block_size": 512, 00:12:32.622 "num_blocks": 65536, 00:12:32.622 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:32.622 "assigned_rate_limits": { 00:12:32.622 "rw_ios_per_sec": 0, 00:12:32.622 "rw_mbytes_per_sec": 0, 00:12:32.622 "r_mbytes_per_sec": 0, 00:12:32.622 "w_mbytes_per_sec": 0 00:12:32.622 }, 00:12:32.622 "claimed": true, 00:12:32.622 "claim_type": "exclusive_write", 00:12:32.622 "zoned": false, 00:12:32.622 "supported_io_types": { 00:12:32.622 "read": true, 00:12:32.622 "write": true, 00:12:32.622 "unmap": true, 00:12:32.622 "flush": true, 00:12:32.622 "reset": true, 00:12:32.622 "nvme_admin": false, 00:12:32.622 "nvme_io": false, 00:12:32.622 "nvme_io_md": false, 00:12:32.622 "write_zeroes": true, 00:12:32.622 "zcopy": true, 00:12:32.622 "get_zone_info": false, 00:12:32.622 "zone_management": false, 00:12:32.622 "zone_append": false, 00:12:32.622 "compare": false, 00:12:32.622 "compare_and_write": false, 00:12:32.622 "abort": true, 00:12:32.622 "seek_hole": false, 00:12:32.622 "seek_data": false, 00:12:32.622 "copy": true, 00:12:32.622 "nvme_iov_md": false 00:12:32.622 }, 00:12:32.622 "memory_domains": [ 00:12:32.622 { 00:12:32.622 "dma_device_id": "system", 00:12:32.622 "dma_device_type": 1 00:12:32.622 }, 00:12:32.622 { 00:12:32.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.622 "dma_device_type": 2 00:12:32.622 } 00:12:32.622 ], 00:12:32.622 "driver_specific": {} 00:12:32.622 } 00:12:32.622 ] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- 
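Above, the test recreates the deleted base bdev under the original UUID (`bdev_malloc_create 32 512 -b NewBaseBdev -u d62ba4f8-...`) and then blocks in `waitforbdev`, which issues `bdev_wait_for_examine` and then `bdev_get_bdevs -b NewBaseBdev -t 2000` until the bdev shows up. That wait-with-timeout pattern can be sketched as follows (an illustration only: `get_bdev` stands in for the RPC call and is an assumption of this sketch, not an SPDK API):

```python
import time

def waitforbdev(get_bdev, name, timeout_s=2.0, poll_interval_s=0.1):
    """Poll until the named bdev appears, loosely mirroring the
    waitforbdev helper in autotest_common.sh; raise on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = get_bdev(name)  # stand-in for `rpc_cmd bdev_get_bdevs -b NAME`
        if bdev is not None:
            return bdev
        time.sleep(poll_interval_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")

# Simulated registry: NewBaseBdev becomes visible on the second poll.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    return {"name": name} if calls["n"] >= 2 else None

print(waitforbdev(fake_get_bdev, "NewBaseBdev")["name"])  # NewBaseBdev
```

Once NewBaseBdev is examined and claimed, the raid reaches `num_base_bdevs_discovered == num_base_bdevs_operational` and transitions to the "online" state seen in the next dump.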
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.622 "name": "Existed_Raid", 00:12:32.622 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:32.622 "strip_size_kb": 0, 00:12:32.622 "state": "online", 00:12:32.622 "raid_level": 
"raid1", 00:12:32.622 "superblock": true, 00:12:32.622 "num_base_bdevs": 4, 00:12:32.622 "num_base_bdevs_discovered": 4, 00:12:32.622 "num_base_bdevs_operational": 4, 00:12:32.622 "base_bdevs_list": [ 00:12:32.622 { 00:12:32.622 "name": "NewBaseBdev", 00:12:32.622 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:32.622 "is_configured": true, 00:12:32.622 "data_offset": 2048, 00:12:32.622 "data_size": 63488 00:12:32.622 }, 00:12:32.622 { 00:12:32.622 "name": "BaseBdev2", 00:12:32.622 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:32.622 "is_configured": true, 00:12:32.622 "data_offset": 2048, 00:12:32.622 "data_size": 63488 00:12:32.622 }, 00:12:32.622 { 00:12:32.622 "name": "BaseBdev3", 00:12:32.622 "uuid": "1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:32.622 "is_configured": true, 00:12:32.622 "data_offset": 2048, 00:12:32.622 "data_size": 63488 00:12:32.622 }, 00:12:32.622 { 00:12:32.622 "name": "BaseBdev4", 00:12:32.622 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:32.622 "is_configured": true, 00:12:32.622 "data_offset": 2048, 00:12:32.622 "data_size": 63488 00:12:32.622 } 00:12:32.622 ] 00:12:32.622 }' 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.622 08:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:33.196 [2024-12-13 08:23:45.359247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:33.196 "name": "Existed_Raid", 00:12:33.196 "aliases": [ 00:12:33.196 "ff48c8e4-b776-4d54-956f-eabf2521d9af" 00:12:33.196 ], 00:12:33.196 "product_name": "Raid Volume", 00:12:33.196 "block_size": 512, 00:12:33.196 "num_blocks": 63488, 00:12:33.196 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:33.196 "assigned_rate_limits": { 00:12:33.196 "rw_ios_per_sec": 0, 00:12:33.196 "rw_mbytes_per_sec": 0, 00:12:33.196 "r_mbytes_per_sec": 0, 00:12:33.196 "w_mbytes_per_sec": 0 00:12:33.196 }, 00:12:33.196 "claimed": false, 00:12:33.196 "zoned": false, 00:12:33.196 "supported_io_types": { 00:12:33.196 "read": true, 00:12:33.196 "write": true, 00:12:33.196 "unmap": false, 00:12:33.196 "flush": false, 00:12:33.196 "reset": true, 00:12:33.196 "nvme_admin": false, 00:12:33.196 "nvme_io": false, 00:12:33.196 "nvme_io_md": false, 00:12:33.196 "write_zeroes": true, 00:12:33.196 "zcopy": false, 00:12:33.196 "get_zone_info": false, 00:12:33.196 "zone_management": false, 00:12:33.196 "zone_append": false, 00:12:33.196 "compare": false, 00:12:33.196 "compare_and_write": false, 00:12:33.196 "abort": false, 00:12:33.196 "seek_hole": false, 
00:12:33.196 "seek_data": false, 00:12:33.196 "copy": false, 00:12:33.196 "nvme_iov_md": false 00:12:33.196 }, 00:12:33.196 "memory_domains": [ 00:12:33.196 { 00:12:33.196 "dma_device_id": "system", 00:12:33.196 "dma_device_type": 1 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.196 "dma_device_type": 2 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "system", 00:12:33.196 "dma_device_type": 1 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.196 "dma_device_type": 2 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "system", 00:12:33.196 "dma_device_type": 1 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.196 "dma_device_type": 2 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "system", 00:12:33.196 "dma_device_type": 1 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.196 "dma_device_type": 2 00:12:33.196 } 00:12:33.196 ], 00:12:33.196 "driver_specific": { 00:12:33.196 "raid": { 00:12:33.196 "uuid": "ff48c8e4-b776-4d54-956f-eabf2521d9af", 00:12:33.196 "strip_size_kb": 0, 00:12:33.196 "state": "online", 00:12:33.196 "raid_level": "raid1", 00:12:33.196 "superblock": true, 00:12:33.196 "num_base_bdevs": 4, 00:12:33.196 "num_base_bdevs_discovered": 4, 00:12:33.196 "num_base_bdevs_operational": 4, 00:12:33.196 "base_bdevs_list": [ 00:12:33.196 { 00:12:33.196 "name": "NewBaseBdev", 00:12:33.196 "uuid": "d62ba4f8-6587-4fc9-baae-08ee6b0f7dc5", 00:12:33.196 "is_configured": true, 00:12:33.196 "data_offset": 2048, 00:12:33.196 "data_size": 63488 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "name": "BaseBdev2", 00:12:33.196 "uuid": "4415522b-76d4-438f-88b0-8d7d16f8c606", 00:12:33.196 "is_configured": true, 00:12:33.196 "data_offset": 2048, 00:12:33.196 "data_size": 63488 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "name": "BaseBdev3", 00:12:33.196 "uuid": 
"1f65765e-b100-46d0-8fe1-1677975d6c98", 00:12:33.196 "is_configured": true, 00:12:33.196 "data_offset": 2048, 00:12:33.196 "data_size": 63488 00:12:33.196 }, 00:12:33.196 { 00:12:33.196 "name": "BaseBdev4", 00:12:33.196 "uuid": "195ee575-30f9-4e38-9706-c3dc4ecb42a2", 00:12:33.196 "is_configured": true, 00:12:33.196 "data_offset": 2048, 00:12:33.196 "data_size": 63488 00:12:33.196 } 00:12:33.196 ] 00:12:33.196 } 00:12:33.196 } 00:12:33.196 }' 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:33.196 BaseBdev2 00:12:33.196 BaseBdev3 00:12:33.196 BaseBdev4' 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:33.196 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.197 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.456 
08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.456 [2024-12-13 08:23:45.666356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.456 [2024-12-13 08:23:45.666428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.456 [2024-12-13 08:23:45.666534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.456 [2024-12-13 08:23:45.666843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.456 [2024-12-13 08:23:45.666901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:33.456 08:23:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74048 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74048 ']' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74048 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74048 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74048' 00:12:33.456 killing process with pid 74048 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74048 00:12:33.456 [2024-12-13 08:23:45.711093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.456 08:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74048 00:12:34.025 [2024-12-13 08:23:46.124818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.962 08:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:34.962 00:12:34.962 real 0m11.606s 00:12:34.962 user 0m18.403s 00:12:34.962 sys 0m2.001s 00:12:34.962 ************************************ 00:12:34.962 END TEST raid_state_function_test_sb 00:12:34.962 ************************************ 00:12:34.962 08:23:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.962 08:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.221 08:23:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:35.221 08:23:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:35.221 08:23:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.221 08:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.221 ************************************ 00:12:35.221 START TEST raid_superblock_test 00:12:35.221 ************************************ 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:35.221 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74713 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74713 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74713 ']' 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.222 08:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.222 [2024-12-13 08:23:47.464538] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:35.222 [2024-12-13 08:23:47.464760] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74713 ] 00:12:35.481 [2024-12-13 08:23:47.628399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.481 [2024-12-13 08:23:47.751767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.740 [2024-12-13 08:23:47.961893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.740 [2024-12-13 08:23:47.962017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:35.999 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:35.999 
08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.000 malloc1 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.000 [2024-12-13 08:23:48.351670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:36.000 [2024-12-13 08:23:48.351788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.000 [2024-12-13 08:23:48.351833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:36.000 [2024-12-13 08:23:48.351884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.000 [2024-12-13 08:23:48.354029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.000 [2024-12-13 08:23:48.354097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:36.000 pt1 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.000 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 malloc2 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 [2024-12-13 08:23:48.406076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:36.260 [2024-12-13 08:23:48.406198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.260 [2024-12-13 08:23:48.406240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:36.260 [2024-12-13 08:23:48.406269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.260 [2024-12-13 08:23:48.408418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.260 [2024-12-13 08:23:48.408490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:36.260 
pt2 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 malloc3 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 [2024-12-13 08:23:48.479486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:36.260 [2024-12-13 08:23:48.479600] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.260 [2024-12-13 08:23:48.479645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:36.260 [2024-12-13 08:23:48.479679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.260 [2024-12-13 08:23:48.482094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.260 [2024-12-13 08:23:48.482146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:36.260 pt3 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 malloc4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 [2024-12-13 08:23:48.535465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:36.260 [2024-12-13 08:23:48.535574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.260 [2024-12-13 08:23:48.535617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:36.260 [2024-12-13 08:23:48.535646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.260 [2024-12-13 08:23:48.537832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.260 [2024-12-13 08:23:48.537902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:36.260 pt4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 [2024-12-13 08:23:48.547457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:36.260 [2024-12-13 08:23:48.549301] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:36.260 [2024-12-13 08:23:48.549401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:36.260 [2024-12-13 08:23:48.549482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:36.260 [2024-12-13 08:23:48.549706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:36.260 [2024-12-13 08:23:48.549758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.260 [2024-12-13 08:23:48.550039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:36.260 [2024-12-13 08:23:48.550288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:36.260 [2024-12-13 08:23:48.550343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:36.260 [2024-12-13 08:23:48.550565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.260 
08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.260 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.261 "name": "raid_bdev1", 00:12:36.261 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:36.261 "strip_size_kb": 0, 00:12:36.261 "state": "online", 00:12:36.261 "raid_level": "raid1", 00:12:36.261 "superblock": true, 00:12:36.261 "num_base_bdevs": 4, 00:12:36.261 "num_base_bdevs_discovered": 4, 00:12:36.261 "num_base_bdevs_operational": 4, 00:12:36.261 "base_bdevs_list": [ 00:12:36.261 { 00:12:36.261 "name": "pt1", 00:12:36.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.261 "is_configured": true, 00:12:36.261 "data_offset": 2048, 00:12:36.261 "data_size": 63488 00:12:36.261 }, 00:12:36.261 { 00:12:36.261 "name": "pt2", 00:12:36.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.261 "is_configured": true, 00:12:36.261 "data_offset": 2048, 00:12:36.261 "data_size": 63488 00:12:36.261 }, 00:12:36.261 { 00:12:36.261 "name": "pt3", 00:12:36.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.261 "is_configured": true, 00:12:36.261 "data_offset": 2048, 00:12:36.261 "data_size": 63488 
00:12:36.261 }, 00:12:36.261 { 00:12:36.261 "name": "pt4", 00:12:36.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.261 "is_configured": true, 00:12:36.261 "data_offset": 2048, 00:12:36.261 "data_size": 63488 00:12:36.261 } 00:12:36.261 ] 00:12:36.261 }' 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.261 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.829 08:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.829 [2024-12-13 08:23:48.991151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.829 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.829 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.829 "name": "raid_bdev1", 00:12:36.829 "aliases": [ 00:12:36.829 "c629cd2f-75e6-4a31-b50f-ea207b58420e" 00:12:36.829 ], 
00:12:36.829 "product_name": "Raid Volume", 00:12:36.829 "block_size": 512, 00:12:36.829 "num_blocks": 63488, 00:12:36.829 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:36.829 "assigned_rate_limits": { 00:12:36.829 "rw_ios_per_sec": 0, 00:12:36.829 "rw_mbytes_per_sec": 0, 00:12:36.830 "r_mbytes_per_sec": 0, 00:12:36.830 "w_mbytes_per_sec": 0 00:12:36.830 }, 00:12:36.830 "claimed": false, 00:12:36.830 "zoned": false, 00:12:36.830 "supported_io_types": { 00:12:36.830 "read": true, 00:12:36.830 "write": true, 00:12:36.830 "unmap": false, 00:12:36.830 "flush": false, 00:12:36.830 "reset": true, 00:12:36.830 "nvme_admin": false, 00:12:36.830 "nvme_io": false, 00:12:36.830 "nvme_io_md": false, 00:12:36.830 "write_zeroes": true, 00:12:36.830 "zcopy": false, 00:12:36.830 "get_zone_info": false, 00:12:36.830 "zone_management": false, 00:12:36.830 "zone_append": false, 00:12:36.830 "compare": false, 00:12:36.830 "compare_and_write": false, 00:12:36.830 "abort": false, 00:12:36.830 "seek_hole": false, 00:12:36.830 "seek_data": false, 00:12:36.830 "copy": false, 00:12:36.830 "nvme_iov_md": false 00:12:36.830 }, 00:12:36.830 "memory_domains": [ 00:12:36.830 { 00:12:36.830 "dma_device_id": "system", 00:12:36.830 "dma_device_type": 1 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.830 "dma_device_type": 2 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "system", 00:12:36.830 "dma_device_type": 1 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.830 "dma_device_type": 2 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "system", 00:12:36.830 "dma_device_type": 1 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.830 "dma_device_type": 2 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": "system", 00:12:36.830 "dma_device_type": 1 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:36.830 "dma_device_type": 2 00:12:36.830 } 00:12:36.830 ], 00:12:36.830 "driver_specific": { 00:12:36.830 "raid": { 00:12:36.830 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:36.830 "strip_size_kb": 0, 00:12:36.830 "state": "online", 00:12:36.830 "raid_level": "raid1", 00:12:36.830 "superblock": true, 00:12:36.830 "num_base_bdevs": 4, 00:12:36.830 "num_base_bdevs_discovered": 4, 00:12:36.830 "num_base_bdevs_operational": 4, 00:12:36.830 "base_bdevs_list": [ 00:12:36.830 { 00:12:36.830 "name": "pt1", 00:12:36.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.830 "is_configured": true, 00:12:36.830 "data_offset": 2048, 00:12:36.830 "data_size": 63488 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "name": "pt2", 00:12:36.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.830 "is_configured": true, 00:12:36.830 "data_offset": 2048, 00:12:36.830 "data_size": 63488 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "name": "pt3", 00:12:36.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.830 "is_configured": true, 00:12:36.830 "data_offset": 2048, 00:12:36.830 "data_size": 63488 00:12:36.830 }, 00:12:36.830 { 00:12:36.830 "name": "pt4", 00:12:36.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.830 "is_configured": true, 00:12:36.830 "data_offset": 2048, 00:12:36.830 "data_size": 63488 00:12:36.830 } 00:12:36.830 ] 00:12:36.830 } 00:12:36.830 } 00:12:36.830 }' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:36.830 pt2 00:12:36.830 pt3 00:12:36.830 pt4' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.830 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.090 08:23:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 [2024-12-13 08:23:49.326477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c629cd2f-75e6-4a31-b50f-ea207b58420e 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c629cd2f-75e6-4a31-b50f-ea207b58420e ']' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 [2024-12-13 08:23:49.370088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.090 [2024-12-13 08:23:49.370166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.090 [2024-12-13 08:23:49.370290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.090 [2024-12-13 08:23:49.370401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.090 [2024-12-13 08:23:49.370479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.090 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.350 08:23:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.350 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 [2024-12-13 08:23:49.545818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:37.350 [2024-12-13 08:23:49.547966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:37.350 [2024-12-13 08:23:49.548077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:37.350 [2024-12-13 08:23:49.548169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:37.350 [2024-12-13 08:23:49.548258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:37.350 [2024-12-13 08:23:49.548362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:37.350 [2024-12-13 08:23:49.548444] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:37.351 [2024-12-13 08:23:49.548517] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:37.351 [2024-12-13 08:23:49.548579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.351 [2024-12-13 08:23:49.548620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:37.351 request: 00:12:37.351 { 00:12:37.351 "name": "raid_bdev1", 00:12:37.351 "raid_level": "raid1", 00:12:37.351 "base_bdevs": [ 00:12:37.351 "malloc1", 00:12:37.351 "malloc2", 00:12:37.351 "malloc3", 00:12:37.351 "malloc4" 00:12:37.351 ], 00:12:37.351 "superblock": false, 00:12:37.351 "method": "bdev_raid_create", 00:12:37.351 "req_id": 1 00:12:37.351 } 00:12:37.351 Got JSON-RPC error response 00:12:37.351 response: 00:12:37.351 { 00:12:37.351 "code": -17, 00:12:37.351 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:37.351 } 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:37.351 
08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.351 [2024-12-13 08:23:49.613688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:37.351 [2024-12-13 08:23:49.613793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.351 [2024-12-13 08:23:49.613829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:37.351 [2024-12-13 08:23:49.613859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.351 [2024-12-13 08:23:49.616169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.351 [2024-12-13 08:23:49.616246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:37.351 [2024-12-13 08:23:49.616382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:37.351 [2024-12-13 08:23:49.616512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:37.351 pt1 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.351 08:23:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.351 "name": "raid_bdev1", 00:12:37.351 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:37.351 "strip_size_kb": 0, 00:12:37.351 "state": "configuring", 00:12:37.351 "raid_level": "raid1", 00:12:37.351 "superblock": true, 00:12:37.351 "num_base_bdevs": 4, 00:12:37.351 "num_base_bdevs_discovered": 1, 00:12:37.351 "num_base_bdevs_operational": 4, 00:12:37.351 "base_bdevs_list": [ 00:12:37.351 { 00:12:37.351 "name": "pt1", 00:12:37.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.351 "is_configured": true, 00:12:37.351 "data_offset": 2048, 00:12:37.351 "data_size": 63488 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "name": null, 00:12:37.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.351 "is_configured": false, 00:12:37.351 "data_offset": 2048, 00:12:37.351 "data_size": 63488 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "name": null, 00:12:37.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.351 
"is_configured": false, 00:12:37.351 "data_offset": 2048, 00:12:37.351 "data_size": 63488 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "name": null, 00:12:37.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.351 "is_configured": false, 00:12:37.351 "data_offset": 2048, 00:12:37.351 "data_size": 63488 00:12:37.351 } 00:12:37.351 ] 00:12:37.351 }' 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.351 08:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.917 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.918 [2024-12-13 08:23:50.032995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:37.918 [2024-12-13 08:23:50.033141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.918 [2024-12-13 08:23:50.033205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:37.918 [2024-12-13 08:23:50.033221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.918 [2024-12-13 08:23:50.033727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.918 [2024-12-13 08:23:50.033749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:37.918 [2024-12-13 08:23:50.033840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:37.918 [2024-12-13 08:23:50.033866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:37.918 pt2 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.918 [2024-12-13 08:23:50.044985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.918 "name": "raid_bdev1", 00:12:37.918 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:37.918 "strip_size_kb": 0, 00:12:37.918 "state": "configuring", 00:12:37.918 "raid_level": "raid1", 00:12:37.918 "superblock": true, 00:12:37.918 "num_base_bdevs": 4, 00:12:37.918 "num_base_bdevs_discovered": 1, 00:12:37.918 "num_base_bdevs_operational": 4, 00:12:37.918 "base_bdevs_list": [ 00:12:37.918 { 00:12:37.918 "name": "pt1", 00:12:37.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.918 "is_configured": true, 00:12:37.918 "data_offset": 2048, 00:12:37.918 "data_size": 63488 00:12:37.918 }, 00:12:37.918 { 00:12:37.918 "name": null, 00:12:37.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.918 "is_configured": false, 00:12:37.918 "data_offset": 0, 00:12:37.918 "data_size": 63488 00:12:37.918 }, 00:12:37.918 { 00:12:37.918 "name": null, 00:12:37.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.918 "is_configured": false, 00:12:37.918 "data_offset": 2048, 00:12:37.918 "data_size": 63488 00:12:37.918 }, 00:12:37.918 { 00:12:37.918 "name": null, 00:12:37.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.918 "is_configured": false, 00:12:37.918 "data_offset": 2048, 00:12:37.918 "data_size": 63488 00:12:37.918 } 00:12:37.918 ] 00:12:37.918 }' 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.918 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.176 [2024-12-13 08:23:50.528150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:38.176 [2024-12-13 08:23:50.528290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.176 [2024-12-13 08:23:50.528333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:38.176 [2024-12-13 08:23:50.528365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.176 [2024-12-13 08:23:50.528863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.176 [2024-12-13 08:23:50.528926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:38.176 [2024-12-13 08:23:50.529052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:38.176 [2024-12-13 08:23:50.529117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:38.176 pt2 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:38.176 08:23:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.176 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.176 [2024-12-13 08:23:50.540084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:38.176 [2024-12-13 08:23:50.540187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.176 [2024-12-13 08:23:50.540225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:38.176 [2024-12-13 08:23:50.540256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.452 [2024-12-13 08:23:50.540713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.452 [2024-12-13 08:23:50.540776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:38.452 [2024-12-13 08:23:50.540883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:38.452 [2024-12-13 08:23:50.540938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:38.452 pt3 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.452 [2024-12-13 08:23:50.552042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:38.452 [2024-12-13 
08:23:50.552141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.452 [2024-12-13 08:23:50.552163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:38.452 [2024-12-13 08:23:50.552172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.452 [2024-12-13 08:23:50.552611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.452 [2024-12-13 08:23:50.552629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:38.452 [2024-12-13 08:23:50.552701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:38.452 [2024-12-13 08:23:50.552726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:38.452 [2024-12-13 08:23:50.552882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:38.452 [2024-12-13 08:23:50.552892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.452 [2024-12-13 08:23:50.553167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:38.452 [2024-12-13 08:23:50.553348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:38.452 [2024-12-13 08:23:50.553362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:38.452 pt4 00:12:38.452 [2024-12-13 08:23:50.553507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.452 "name": "raid_bdev1", 00:12:38.452 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:38.452 "strip_size_kb": 0, 00:12:38.452 "state": "online", 00:12:38.452 "raid_level": "raid1", 00:12:38.452 "superblock": true, 00:12:38.452 "num_base_bdevs": 4, 00:12:38.452 
"num_base_bdevs_discovered": 4, 00:12:38.452 "num_base_bdevs_operational": 4, 00:12:38.452 "base_bdevs_list": [ 00:12:38.452 { 00:12:38.452 "name": "pt1", 00:12:38.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.452 "is_configured": true, 00:12:38.452 "data_offset": 2048, 00:12:38.452 "data_size": 63488 00:12:38.452 }, 00:12:38.452 { 00:12:38.452 "name": "pt2", 00:12:38.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.452 "is_configured": true, 00:12:38.452 "data_offset": 2048, 00:12:38.452 "data_size": 63488 00:12:38.452 }, 00:12:38.452 { 00:12:38.452 "name": "pt3", 00:12:38.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.452 "is_configured": true, 00:12:38.452 "data_offset": 2048, 00:12:38.452 "data_size": 63488 00:12:38.452 }, 00:12:38.452 { 00:12:38.452 "name": "pt4", 00:12:38.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.452 "is_configured": true, 00:12:38.452 "data_offset": 2048, 00:12:38.452 "data_size": 63488 00:12:38.452 } 00:12:38.452 ] 00:12:38.452 }' 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.452 08:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.712 [2024-12-13 08:23:51.043621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.712 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.712 "name": "raid_bdev1", 00:12:38.712 "aliases": [ 00:12:38.712 "c629cd2f-75e6-4a31-b50f-ea207b58420e" 00:12:38.712 ], 00:12:38.712 "product_name": "Raid Volume", 00:12:38.712 "block_size": 512, 00:12:38.712 "num_blocks": 63488, 00:12:38.712 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:38.712 "assigned_rate_limits": { 00:12:38.712 "rw_ios_per_sec": 0, 00:12:38.713 "rw_mbytes_per_sec": 0, 00:12:38.713 "r_mbytes_per_sec": 0, 00:12:38.713 "w_mbytes_per_sec": 0 00:12:38.713 }, 00:12:38.713 "claimed": false, 00:12:38.713 "zoned": false, 00:12:38.713 "supported_io_types": { 00:12:38.713 "read": true, 00:12:38.713 "write": true, 00:12:38.713 "unmap": false, 00:12:38.713 "flush": false, 00:12:38.713 "reset": true, 00:12:38.713 "nvme_admin": false, 00:12:38.713 "nvme_io": false, 00:12:38.713 "nvme_io_md": false, 00:12:38.713 "write_zeroes": true, 00:12:38.713 "zcopy": false, 00:12:38.713 "get_zone_info": false, 00:12:38.713 "zone_management": false, 00:12:38.713 "zone_append": false, 00:12:38.713 "compare": false, 00:12:38.713 "compare_and_write": false, 00:12:38.713 "abort": false, 00:12:38.713 "seek_hole": false, 00:12:38.713 "seek_data": false, 00:12:38.713 "copy": false, 00:12:38.713 "nvme_iov_md": false 00:12:38.713 }, 00:12:38.713 "memory_domains": [ 00:12:38.713 { 00:12:38.713 "dma_device_id": "system", 00:12:38.713 
"dma_device_type": 1 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.713 "dma_device_type": 2 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "system", 00:12:38.713 "dma_device_type": 1 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.713 "dma_device_type": 2 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "system", 00:12:38.713 "dma_device_type": 1 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.713 "dma_device_type": 2 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "system", 00:12:38.713 "dma_device_type": 1 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.713 "dma_device_type": 2 00:12:38.713 } 00:12:38.713 ], 00:12:38.713 "driver_specific": { 00:12:38.713 "raid": { 00:12:38.713 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:38.713 "strip_size_kb": 0, 00:12:38.713 "state": "online", 00:12:38.713 "raid_level": "raid1", 00:12:38.713 "superblock": true, 00:12:38.713 "num_base_bdevs": 4, 00:12:38.713 "num_base_bdevs_discovered": 4, 00:12:38.713 "num_base_bdevs_operational": 4, 00:12:38.713 "base_bdevs_list": [ 00:12:38.713 { 00:12:38.713 "name": "pt1", 00:12:38.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.713 "is_configured": true, 00:12:38.713 "data_offset": 2048, 00:12:38.713 "data_size": 63488 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "name": "pt2", 00:12:38.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.713 "is_configured": true, 00:12:38.713 "data_offset": 2048, 00:12:38.713 "data_size": 63488 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "name": "pt3", 00:12:38.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.713 "is_configured": true, 00:12:38.713 "data_offset": 2048, 00:12:38.713 "data_size": 63488 00:12:38.713 }, 00:12:38.713 { 00:12:38.713 "name": "pt4", 00:12:38.713 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:38.713 "is_configured": true, 00:12:38.713 "data_offset": 2048, 00:12:38.713 "data_size": 63488 00:12:38.713 } 00:12:38.713 ] 00:12:38.713 } 00:12:38.713 } 00:12:38.713 }' 00:12:38.713 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.972 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:38.972 pt2 00:12:38.972 pt3 00:12:38.972 pt4' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.973 [2024-12-13 08:23:51.311074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.973 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c629cd2f-75e6-4a31-b50f-ea207b58420e '!=' c629cd2f-75e6-4a31-b50f-ea207b58420e ']' 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.232 [2024-12-13 08:23:51.354763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:39.232 08:23:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.232 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.232 "name": "raid_bdev1", 00:12:39.232 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:39.232 "strip_size_kb": 0, 00:12:39.232 "state": "online", 
00:12:39.232 "raid_level": "raid1", 00:12:39.232 "superblock": true, 00:12:39.232 "num_base_bdevs": 4, 00:12:39.232 "num_base_bdevs_discovered": 3, 00:12:39.233 "num_base_bdevs_operational": 3, 00:12:39.233 "base_bdevs_list": [ 00:12:39.233 { 00:12:39.233 "name": null, 00:12:39.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.233 "is_configured": false, 00:12:39.233 "data_offset": 0, 00:12:39.233 "data_size": 63488 00:12:39.233 }, 00:12:39.233 { 00:12:39.233 "name": "pt2", 00:12:39.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.233 "is_configured": true, 00:12:39.233 "data_offset": 2048, 00:12:39.233 "data_size": 63488 00:12:39.233 }, 00:12:39.233 { 00:12:39.233 "name": "pt3", 00:12:39.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.233 "is_configured": true, 00:12:39.233 "data_offset": 2048, 00:12:39.233 "data_size": 63488 00:12:39.233 }, 00:12:39.233 { 00:12:39.233 "name": "pt4", 00:12:39.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.233 "is_configured": true, 00:12:39.233 "data_offset": 2048, 00:12:39.233 "data_size": 63488 00:12:39.233 } 00:12:39.233 ] 00:12:39.233 }' 00:12:39.233 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.233 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.492 [2024-12-13 08:23:51.778004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.492 [2024-12-13 08:23:51.778081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.492 [2024-12-13 08:23:51.778190] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:39.492 [2024-12-13 08:23:51.778310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.492 [2024-12-13 08:23:51.778361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:39.492 
08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.492 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:39.493 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:39.493 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:39.493 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.493 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.752 [2024-12-13 08:23:51.861838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:39.752 [2024-12-13 08:23:51.861932] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.752 [2024-12-13 08:23:51.861969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:39.752 [2024-12-13 08:23:51.861997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.752 [2024-12-13 08:23:51.864325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.752 [2024-12-13 08:23:51.864395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:39.752 [2024-12-13 08:23:51.864516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:39.752 [2024-12-13 08:23:51.864588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:39.752 pt2 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.752 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.753 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.753 "name": "raid_bdev1", 00:12:39.753 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:39.753 "strip_size_kb": 0, 00:12:39.753 "state": "configuring", 00:12:39.753 "raid_level": "raid1", 00:12:39.753 "superblock": true, 00:12:39.753 "num_base_bdevs": 4, 00:12:39.753 "num_base_bdevs_discovered": 1, 00:12:39.753 "num_base_bdevs_operational": 3, 00:12:39.753 "base_bdevs_list": [ 00:12:39.753 { 00:12:39.753 "name": null, 00:12:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.753 "is_configured": false, 00:12:39.753 "data_offset": 2048, 00:12:39.753 "data_size": 63488 00:12:39.753 }, 00:12:39.753 { 00:12:39.753 "name": "pt2", 00:12:39.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.753 "is_configured": true, 00:12:39.753 "data_offset": 2048, 00:12:39.753 "data_size": 63488 00:12:39.753 }, 00:12:39.753 { 00:12:39.753 "name": null, 00:12:39.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.753 "is_configured": false, 00:12:39.753 "data_offset": 2048, 00:12:39.753 "data_size": 63488 00:12:39.753 }, 00:12:39.753 { 00:12:39.753 "name": null, 00:12:39.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.753 "is_configured": false, 00:12:39.753 "data_offset": 2048, 00:12:39.753 "data_size": 63488 00:12:39.753 } 00:12:39.753 ] 00:12:39.753 }' 
00:12:39.753 08:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.753 08:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 [2024-12-13 08:23:52.317162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.012 [2024-12-13 08:23:52.317289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.012 [2024-12-13 08:23:52.317344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:40.012 [2024-12-13 08:23:52.317380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.012 [2024-12-13 08:23:52.317930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.012 [2024-12-13 08:23:52.317999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.012 [2024-12-13 08:23:52.318131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:40.012 [2024-12-13 08:23:52.318160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.012 pt3 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.012 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.012 "name": "raid_bdev1", 00:12:40.012 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:40.012 "strip_size_kb": 0, 00:12:40.012 "state": "configuring", 00:12:40.012 "raid_level": "raid1", 00:12:40.012 "superblock": true, 00:12:40.012 "num_base_bdevs": 4, 00:12:40.012 "num_base_bdevs_discovered": 2, 00:12:40.012 "num_base_bdevs_operational": 3, 00:12:40.012 
"base_bdevs_list": [ 00:12:40.012 { 00:12:40.012 "name": null, 00:12:40.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.012 "is_configured": false, 00:12:40.012 "data_offset": 2048, 00:12:40.012 "data_size": 63488 00:12:40.012 }, 00:12:40.012 { 00:12:40.012 "name": "pt2", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.013 "is_configured": true, 00:12:40.013 "data_offset": 2048, 00:12:40.013 "data_size": 63488 00:12:40.013 }, 00:12:40.013 { 00:12:40.013 "name": "pt3", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.013 "is_configured": true, 00:12:40.013 "data_offset": 2048, 00:12:40.013 "data_size": 63488 00:12:40.013 }, 00:12:40.013 { 00:12:40.013 "name": null, 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.013 "is_configured": false, 00:12:40.013 "data_offset": 2048, 00:12:40.013 "data_size": 63488 00:12:40.013 } 00:12:40.013 ] 00:12:40.013 }' 00:12:40.013 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.013 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.580 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.580 [2024-12-13 08:23:52.732397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.580 [2024-12-13 08:23:52.732510] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.580 [2024-12-13 08:23:52.732561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:40.580 [2024-12-13 08:23:52.732589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.580 [2024-12-13 08:23:52.733045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.580 [2024-12-13 08:23:52.733124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.581 [2024-12-13 08:23:52.733243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:40.581 [2024-12-13 08:23:52.733296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.581 [2024-12-13 08:23:52.733456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.581 [2024-12-13 08:23:52.733493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.581 [2024-12-13 08:23:52.733750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:40.581 [2024-12-13 08:23:52.733938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.581 [2024-12-13 08:23:52.733984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:40.581 [2024-12-13 08:23:52.734183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.581 pt4 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.581 "name": "raid_bdev1", 00:12:40.581 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:40.581 "strip_size_kb": 0, 00:12:40.581 "state": "online", 00:12:40.581 "raid_level": "raid1", 00:12:40.581 "superblock": true, 00:12:40.581 "num_base_bdevs": 4, 00:12:40.581 "num_base_bdevs_discovered": 3, 00:12:40.581 "num_base_bdevs_operational": 3, 00:12:40.581 "base_bdevs_list": [ 00:12:40.581 { 00:12:40.581 "name": null, 00:12:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.581 "is_configured": false, 00:12:40.581 
"data_offset": 2048, 00:12:40.581 "data_size": 63488 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "name": "pt2", 00:12:40.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.581 "is_configured": true, 00:12:40.581 "data_offset": 2048, 00:12:40.581 "data_size": 63488 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "name": "pt3", 00:12:40.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.581 "is_configured": true, 00:12:40.581 "data_offset": 2048, 00:12:40.581 "data_size": 63488 00:12:40.581 }, 00:12:40.581 { 00:12:40.581 "name": "pt4", 00:12:40.581 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.581 "is_configured": true, 00:12:40.581 "data_offset": 2048, 00:12:40.581 "data_size": 63488 00:12:40.581 } 00:12:40.581 ] 00:12:40.581 }' 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.581 08:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.840 [2024-12-13 08:23:53.187623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.840 [2024-12-13 08:23:53.187695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.840 [2024-12-13 08:23:53.187800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.840 [2024-12-13 08:23:53.187918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.840 [2024-12-13 08:23:53.187972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:40.840 08:23:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:40.840 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.099 [2024-12-13 08:23:53.259486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.099 [2024-12-13 08:23:53.259609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:41.099 [2024-12-13 08:23:53.259658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:41.099 [2024-12-13 08:23:53.259693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.099 [2024-12-13 08:23:53.261934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.099 [2024-12-13 08:23:53.262011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.099 [2024-12-13 08:23:53.262132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.099 [2024-12-13 08:23:53.262239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.099 [2024-12-13 08:23:53.262436] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:41.099 [2024-12-13 08:23:53.262504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.099 [2024-12-13 08:23:53.262541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:41.099 [2024-12-13 08:23:53.262689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.099 [2024-12-13 08:23:53.262825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:41.099 pt1 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.099 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.100 "name": "raid_bdev1", 00:12:41.100 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:41.100 "strip_size_kb": 0, 00:12:41.100 "state": "configuring", 00:12:41.100 "raid_level": "raid1", 00:12:41.100 "superblock": true, 00:12:41.100 "num_base_bdevs": 4, 00:12:41.100 "num_base_bdevs_discovered": 2, 00:12:41.100 "num_base_bdevs_operational": 3, 00:12:41.100 "base_bdevs_list": [ 00:12:41.100 { 00:12:41.100 "name": null, 00:12:41.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.100 "is_configured": false, 00:12:41.100 "data_offset": 2048, 00:12:41.100 
"data_size": 63488 00:12:41.100 }, 00:12:41.100 { 00:12:41.100 "name": "pt2", 00:12:41.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.100 "is_configured": true, 00:12:41.100 "data_offset": 2048, 00:12:41.100 "data_size": 63488 00:12:41.100 }, 00:12:41.100 { 00:12:41.100 "name": "pt3", 00:12:41.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.100 "is_configured": true, 00:12:41.100 "data_offset": 2048, 00:12:41.100 "data_size": 63488 00:12:41.100 }, 00:12:41.100 { 00:12:41.100 "name": null, 00:12:41.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.100 "is_configured": false, 00:12:41.100 "data_offset": 2048, 00:12:41.100 "data_size": 63488 00:12:41.100 } 00:12:41.100 ] 00:12:41.100 }' 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.100 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 [2024-12-13 
08:23:53.790626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:41.668 [2024-12-13 08:23:53.790737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.668 [2024-12-13 08:23:53.790789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:41.668 [2024-12-13 08:23:53.790803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.668 [2024-12-13 08:23:53.791351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.668 [2024-12-13 08:23:53.791379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:41.668 [2024-12-13 08:23:53.791481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:41.668 [2024-12-13 08:23:53.791505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:41.668 [2024-12-13 08:23:53.791662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:41.668 [2024-12-13 08:23:53.791676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.668 [2024-12-13 08:23:53.791956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:41.668 [2024-12-13 08:23:53.792147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:41.668 [2024-12-13 08:23:53.792161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:41.668 [2024-12-13 08:23:53.792317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.668 pt4 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:41.668 08:23:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.668 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.669 "name": "raid_bdev1", 00:12:41.669 "uuid": "c629cd2f-75e6-4a31-b50f-ea207b58420e", 00:12:41.669 "strip_size_kb": 0, 00:12:41.669 "state": "online", 00:12:41.669 "raid_level": "raid1", 00:12:41.669 "superblock": true, 00:12:41.669 "num_base_bdevs": 4, 00:12:41.669 "num_base_bdevs_discovered": 3, 00:12:41.669 "num_base_bdevs_operational": 3, 00:12:41.669 "base_bdevs_list": [ 00:12:41.669 { 
00:12:41.669 "name": null, 00:12:41.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.669 "is_configured": false, 00:12:41.669 "data_offset": 2048, 00:12:41.669 "data_size": 63488 00:12:41.669 }, 00:12:41.669 { 00:12:41.669 "name": "pt2", 00:12:41.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.669 "is_configured": true, 00:12:41.669 "data_offset": 2048, 00:12:41.669 "data_size": 63488 00:12:41.669 }, 00:12:41.669 { 00:12:41.669 "name": "pt3", 00:12:41.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.669 "is_configured": true, 00:12:41.669 "data_offset": 2048, 00:12:41.669 "data_size": 63488 00:12:41.669 }, 00:12:41.669 { 00:12:41.669 "name": "pt4", 00:12:41.669 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.669 "is_configured": true, 00:12:41.669 "data_offset": 2048, 00:12:41.669 "data_size": 63488 00:12:41.669 } 00:12:41.669 ] 00:12:41.669 }' 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.669 08:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.927 
08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.927 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.927 [2024-12-13 08:23:54.290138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c629cd2f-75e6-4a31-b50f-ea207b58420e '!=' c629cd2f-75e6-4a31-b50f-ea207b58420e ']' 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74713 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74713 ']' 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74713 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74713 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74713' 00:12:42.186 killing process with pid 74713 00:12:42.186 08:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74713 00:12:42.186 [2024-12-13 08:23:54.367764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.186 [2024-12-13 08:23:54.367867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.186 08:23:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74713 00:12:42.186 [2024-12-13 08:23:54.367955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.186 [2024-12-13 08:23:54.367968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:42.445 [2024-12-13 08:23:54.775404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.829 08:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:43.829 00:12:43.829 real 0m8.551s 00:12:43.829 user 0m13.456s 00:12:43.829 sys 0m1.555s 00:12:43.829 08:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.829 08:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.829 ************************************ 00:12:43.829 END TEST raid_superblock_test 00:12:43.829 ************************************ 00:12:43.829 08:23:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:43.829 08:23:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.829 08:23:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.829 08:23:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.829 ************************************ 00:12:43.829 START TEST raid_read_error_test 00:12:43.829 ************************************ 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:43.829 
08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.829 08:23:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:43.829 08:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MfycalbBQX 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75200 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75200 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75200 ']' 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.829 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.829 [2024-12-13 08:23:56.096659] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:43.829 [2024-12-13 08:23:56.096870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75200 ] 00:12:44.088 [2024-12-13 08:23:56.252414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.088 [2024-12-13 08:23:56.370154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.347 [2024-12-13 08:23:56.579182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.347 [2024-12-13 08:23:56.579218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.607 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 BaseBdev1_malloc 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 true 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 [2024-12-13 08:23:56.999811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.868 [2024-12-13 08:23:56.999932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.868 [2024-12-13 08:23:56.999970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.868 [2024-12-13 08:23:57.000001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.868 [2024-12-13 08:23:57.002073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.868 [2024-12-13 08:23:57.002163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.868 BaseBdev1 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 BaseBdev2_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 true 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 [2024-12-13 08:23:57.067510] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.868 [2024-12-13 08:23:57.067627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.868 [2024-12-13 08:23:57.067664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.868 [2024-12-13 08:23:57.067695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.868 [2024-12-13 08:23:57.069925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.868 [2024-12-13 08:23:57.069966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.868 BaseBdev2 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 BaseBdev3_malloc 00:12:44.868 08:23:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 true 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 [2024-12-13 08:23:57.148989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.868 [2024-12-13 08:23:57.149152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.868 [2024-12-13 08:23:57.149203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.868 [2024-12-13 08:23:57.149263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.868 [2024-12-13 08:23:57.151620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.868 [2024-12-13 08:23:57.151718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.868 BaseBdev3 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 BaseBdev4_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 true 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 [2024-12-13 08:23:57.216628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.868 [2024-12-13 08:23:57.216733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.868 [2024-12-13 08:23:57.216771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.868 [2024-12-13 08:23:57.216801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.868 [2024-12-13 08:23:57.219109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.868 [2024-12-13 08:23:57.219195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.868 BaseBdev4 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.868 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.868 [2024-12-13 08:23:57.228730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.868 [2024-12-13 08:23:57.230874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.869 [2024-12-13 08:23:57.230966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.869 [2024-12-13 08:23:57.231050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.129 [2024-12-13 08:23:57.231357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:45.129 [2024-12-13 08:23:57.231383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.129 [2024-12-13 08:23:57.231698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:45.129 [2024-12-13 08:23:57.231905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:45.129 [2024-12-13 08:23:57.231915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:45.129 [2024-12-13 08:23:57.232211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.129 08:23:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.129 "name": "raid_bdev1", 00:12:45.129 "uuid": "1690ff52-7c69-49ad-a36b-4d88548a0887", 00:12:45.129 "strip_size_kb": 0, 00:12:45.129 "state": "online", 00:12:45.129 "raid_level": "raid1", 00:12:45.129 "superblock": true, 00:12:45.129 "num_base_bdevs": 4, 00:12:45.129 "num_base_bdevs_discovered": 4, 00:12:45.129 "num_base_bdevs_operational": 4, 00:12:45.129 "base_bdevs_list": [ 00:12:45.129 { 
00:12:45.129 "name": "BaseBdev1", 00:12:45.129 "uuid": "49026c4d-5b7f-5f56-ae35-6dd008060f59", 00:12:45.129 "is_configured": true, 00:12:45.129 "data_offset": 2048, 00:12:45.129 "data_size": 63488 00:12:45.129 }, 00:12:45.129 { 00:12:45.129 "name": "BaseBdev2", 00:12:45.129 "uuid": "1b7f40d0-45e3-5fee-bf29-ff439fdaec42", 00:12:45.129 "is_configured": true, 00:12:45.129 "data_offset": 2048, 00:12:45.129 "data_size": 63488 00:12:45.129 }, 00:12:45.129 { 00:12:45.129 "name": "BaseBdev3", 00:12:45.129 "uuid": "8cd8386e-e427-5060-bda4-f22b7630224a", 00:12:45.129 "is_configured": true, 00:12:45.129 "data_offset": 2048, 00:12:45.129 "data_size": 63488 00:12:45.129 }, 00:12:45.129 { 00:12:45.129 "name": "BaseBdev4", 00:12:45.129 "uuid": "8124447f-cb8d-5368-9e6d-2731c4a3fb84", 00:12:45.129 "is_configured": true, 00:12:45.129 "data_offset": 2048, 00:12:45.129 "data_size": 63488 00:12:45.129 } 00:12:45.129 ] 00:12:45.129 }' 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.129 08:23:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.389 08:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.648 [2024-12-13 08:23:57.809038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.587 08:23:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.587 08:23:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.587 "name": "raid_bdev1", 00:12:46.587 "uuid": "1690ff52-7c69-49ad-a36b-4d88548a0887", 00:12:46.587 "strip_size_kb": 0, 00:12:46.587 "state": "online", 00:12:46.587 "raid_level": "raid1", 00:12:46.587 "superblock": true, 00:12:46.587 "num_base_bdevs": 4, 00:12:46.587 "num_base_bdevs_discovered": 4, 00:12:46.587 "num_base_bdevs_operational": 4, 00:12:46.587 "base_bdevs_list": [ 00:12:46.587 { 00:12:46.587 "name": "BaseBdev1", 00:12:46.587 "uuid": "49026c4d-5b7f-5f56-ae35-6dd008060f59", 00:12:46.587 "is_configured": true, 00:12:46.587 "data_offset": 2048, 00:12:46.587 "data_size": 63488 00:12:46.587 }, 00:12:46.587 { 00:12:46.587 "name": "BaseBdev2", 00:12:46.587 "uuid": "1b7f40d0-45e3-5fee-bf29-ff439fdaec42", 00:12:46.587 "is_configured": true, 00:12:46.587 "data_offset": 2048, 00:12:46.587 "data_size": 63488 00:12:46.587 }, 00:12:46.587 { 00:12:46.587 "name": "BaseBdev3", 00:12:46.587 "uuid": "8cd8386e-e427-5060-bda4-f22b7630224a", 00:12:46.587 "is_configured": true, 00:12:46.587 "data_offset": 2048, 00:12:46.587 "data_size": 63488 00:12:46.587 }, 00:12:46.587 { 00:12:46.587 "name": "BaseBdev4", 00:12:46.587 "uuid": "8124447f-cb8d-5368-9e6d-2731c4a3fb84", 00:12:46.587 "is_configured": true, 00:12:46.587 "data_offset": 2048, 00:12:46.587 "data_size": 63488 00:12:46.587 } 00:12:46.587 ] 00:12:46.587 }' 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.587 08:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.846 [2024-12-13 08:23:59.190027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.846 [2024-12-13 08:23:59.190159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.846 [2024-12-13 08:23:59.193510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.846 [2024-12-13 08:23:59.193624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.846 [2024-12-13 08:23:59.193779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.846 [2024-12-13 08:23:59.193835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:46.846 { 00:12:46.846 "results": [ 00:12:46.846 { 00:12:46.846 "job": "raid_bdev1", 00:12:46.846 "core_mask": "0x1", 00:12:46.846 "workload": "randrw", 00:12:46.846 "percentage": 50, 00:12:46.846 "status": "finished", 00:12:46.846 "queue_depth": 1, 00:12:46.846 "io_size": 131072, 00:12:46.846 "runtime": 1.38208, 00:12:46.846 "iops": 10275.816161148414, 00:12:46.846 "mibps": 1284.4770201435517, 00:12:46.846 "io_failed": 0, 00:12:46.846 "io_timeout": 0, 00:12:46.846 "avg_latency_us": 94.4672664960775, 00:12:46.846 "min_latency_us": 24.593886462882097, 00:12:46.846 "max_latency_us": 1538.235807860262 00:12:46.846 } 00:12:46.846 ], 00:12:46.846 "core_count": 1 00:12:46.846 } 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75200 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75200 ']' 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75200 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:46.846 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75200 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.106 killing process with pid 75200 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75200' 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75200 00:12:47.106 [2024-12-13 08:23:59.241042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.106 08:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75200 00:12:47.364 [2024-12-13 08:23:59.581008] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MfycalbBQX 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:48.743 00:12:48.743 real 0m4.837s 00:12:48.743 user 0m5.693s 00:12:48.743 sys 0m0.617s 
00:12:48.743 ************************************ 00:12:48.743 END TEST raid_read_error_test 00:12:48.743 ************************************ 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.743 08:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 08:24:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:48.743 08:24:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.743 08:24:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.743 08:24:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 ************************************ 00:12:48.743 START TEST raid_write_error_test 00:12:48.743 ************************************ 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.keRFWLqHUn 00:12:48.743 08:24:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75350 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75350 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75350 ']' 00:12:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.743 08:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.743 [2024-12-13 08:24:01.037732] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:48.743 [2024-12-13 08:24:01.037938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75350 ] 00:12:49.002 [2024-12-13 08:24:01.213474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.002 [2024-12-13 08:24:01.340591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.260 [2024-12-13 08:24:01.547022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.260 [2024-12-13 08:24:01.547090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.519 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 BaseBdev1_malloc 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 true 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 [2024-12-13 08:24:01.936922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:49.779 [2024-12-13 08:24:01.937035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.779 [2024-12-13 08:24:01.937073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:49.779 [2024-12-13 08:24:01.937119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.779 [2024-12-13 08:24:01.939167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.779 [2024-12-13 08:24:01.939241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.779 BaseBdev1 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 BaseBdev2_malloc 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:49.779 08:24:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 true 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 [2024-12-13 08:24:02.001592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:49.779 [2024-12-13 08:24:02.001686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.779 [2024-12-13 08:24:02.001705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:49.779 [2024-12-13 08:24:02.001716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.779 [2024-12-13 08:24:02.003781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.779 [2024-12-13 08:24:02.003821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.779 BaseBdev2 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:49.779 BaseBdev3_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 true 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 [2024-12-13 08:24:02.084561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:49.779 [2024-12-13 08:24:02.084657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.779 [2024-12-13 08:24:02.084692] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:49.779 [2024-12-13 08:24:02.084721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.779 [2024-12-13 08:24:02.086775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.779 [2024-12-13 08:24:02.086854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:49.779 BaseBdev3 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 BaseBdev4_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.039 true 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.039 [2024-12-13 08:24:02.153435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:50.039 [2024-12-13 08:24:02.153542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.039 [2024-12-13 08:24:02.153602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:50.039 [2024-12-13 08:24:02.153635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.039 [2024-12-13 08:24:02.155787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.039 [2024-12-13 08:24:02.155865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:50.039 BaseBdev4 
00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.039 [2024-12-13 08:24:02.165460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.039 [2024-12-13 08:24:02.167364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.039 [2024-12-13 08:24:02.167480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.039 [2024-12-13 08:24:02.167573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:50.039 [2024-12-13 08:24:02.167837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:50.039 [2024-12-13 08:24:02.167889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.039 [2024-12-13 08:24:02.168160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:50.039 [2024-12-13 08:24:02.168377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:50.039 [2024-12-13 08:24:02.168418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:50.039 [2024-12-13 08:24:02.168609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.039 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.040 "name": "raid_bdev1", 00:12:50.040 "uuid": "3511740e-dd0f-47e7-ac75-fab1e115792b", 00:12:50.040 "strip_size_kb": 0, 00:12:50.040 "state": "online", 00:12:50.040 "raid_level": "raid1", 00:12:50.040 "superblock": true, 00:12:50.040 "num_base_bdevs": 4, 00:12:50.040 "num_base_bdevs_discovered": 4, 00:12:50.040 
"num_base_bdevs_operational": 4, 00:12:50.040 "base_bdevs_list": [ 00:12:50.040 { 00:12:50.040 "name": "BaseBdev1", 00:12:50.040 "uuid": "55d1f372-3dcf-5e7d-abf6-eeab336373db", 00:12:50.040 "is_configured": true, 00:12:50.040 "data_offset": 2048, 00:12:50.040 "data_size": 63488 00:12:50.040 }, 00:12:50.040 { 00:12:50.040 "name": "BaseBdev2", 00:12:50.040 "uuid": "51c217a0-b4f0-5894-ac4b-21a8261aeea3", 00:12:50.040 "is_configured": true, 00:12:50.040 "data_offset": 2048, 00:12:50.040 "data_size": 63488 00:12:50.040 }, 00:12:50.040 { 00:12:50.040 "name": "BaseBdev3", 00:12:50.040 "uuid": "b0f99d30-49e5-5d06-b91c-16f3999ecffc", 00:12:50.040 "is_configured": true, 00:12:50.040 "data_offset": 2048, 00:12:50.040 "data_size": 63488 00:12:50.040 }, 00:12:50.040 { 00:12:50.040 "name": "BaseBdev4", 00:12:50.040 "uuid": "9c054337-2f67-57c0-8551-d4b50b7b8317", 00:12:50.040 "is_configured": true, 00:12:50.040 "data_offset": 2048, 00:12:50.040 "data_size": 63488 00:12:50.040 } 00:12:50.040 ] 00:12:50.040 }' 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.040 08:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.299 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:50.299 08:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.558 [2024-12-13 08:24:02.673845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.496 [2024-12-13 08:24:03.587853] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:51.496 [2024-12-13 08:24:03.587993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.496 [2024-12-13 08:24:03.588285] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.496 "name": "raid_bdev1", 00:12:51.496 "uuid": "3511740e-dd0f-47e7-ac75-fab1e115792b", 00:12:51.496 "strip_size_kb": 0, 00:12:51.496 "state": "online", 00:12:51.496 "raid_level": "raid1", 00:12:51.496 "superblock": true, 00:12:51.496 "num_base_bdevs": 4, 00:12:51.496 "num_base_bdevs_discovered": 3, 00:12:51.496 "num_base_bdevs_operational": 3, 00:12:51.496 "base_bdevs_list": [ 00:12:51.496 { 00:12:51.496 "name": null, 00:12:51.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.496 "is_configured": false, 00:12:51.496 "data_offset": 0, 00:12:51.496 "data_size": 63488 00:12:51.496 }, 00:12:51.496 { 00:12:51.496 "name": "BaseBdev2", 00:12:51.496 "uuid": "51c217a0-b4f0-5894-ac4b-21a8261aeea3", 00:12:51.496 "is_configured": true, 00:12:51.496 "data_offset": 2048, 00:12:51.496 "data_size": 63488 00:12:51.496 }, 00:12:51.496 { 00:12:51.496 "name": "BaseBdev3", 00:12:51.496 "uuid": "b0f99d30-49e5-5d06-b91c-16f3999ecffc", 00:12:51.496 "is_configured": true, 00:12:51.496 "data_offset": 2048, 00:12:51.496 "data_size": 63488 00:12:51.496 }, 00:12:51.496 { 00:12:51.496 "name": "BaseBdev4", 00:12:51.496 "uuid": "9c054337-2f67-57c0-8551-d4b50b7b8317", 00:12:51.496 "is_configured": true, 00:12:51.496 "data_offset": 2048, 00:12:51.496 "data_size": 63488 00:12:51.496 } 00:12:51.496 ] 
00:12:51.496 }' 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.496 08:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.756 [2024-12-13 08:24:04.055923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.756 [2024-12-13 08:24:04.056015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.756 [2024-12-13 08:24:04.058705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.756 [2024-12-13 08:24:04.058789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.756 [2024-12-13 08:24:04.058910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.756 [2024-12-13 08:24:04.058954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:51.756 { 00:12:51.756 "results": [ 00:12:51.756 { 00:12:51.756 "job": "raid_bdev1", 00:12:51.756 "core_mask": "0x1", 00:12:51.756 "workload": "randrw", 00:12:51.756 "percentage": 50, 00:12:51.756 "status": "finished", 00:12:51.756 "queue_depth": 1, 00:12:51.756 "io_size": 131072, 00:12:51.756 "runtime": 1.383024, 00:12:51.756 "iops": 11085.129397609875, 00:12:51.756 "mibps": 1385.6411747012344, 00:12:51.756 "io_failed": 0, 00:12:51.756 "io_timeout": 0, 00:12:51.756 "avg_latency_us": 87.41271533915784, 00:12:51.756 "min_latency_us": 24.034934497816593, 00:12:51.756 "max_latency_us": 1502.46288209607 00:12:51.756 } 00:12:51.756 ], 00:12:51.756 "core_count": 1 
00:12:51.756 } 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75350 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75350 ']' 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75350 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75350 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75350' 00:12:51.756 killing process with pid 75350 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75350 00:12:51.756 [2024-12-13 08:24:04.093998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.756 08:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75350 00:12:52.323 [2024-12-13 08:24:04.442929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.keRFWLqHUn 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:53.699 ************************************ 00:12:53.699 END TEST 
raid_write_error_test 00:12:53.699 ************************************ 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:53.699 00:12:53.699 real 0m4.741s 00:12:53.699 user 0m5.590s 00:12:53.699 sys 0m0.581s 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.699 08:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.699 08:24:05 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:53.699 08:24:05 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:53.699 08:24:05 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:53.699 08:24:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:53.699 08:24:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.699 08:24:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.699 ************************************ 00:12:53.699 START TEST raid_rebuild_test 00:12:53.699 ************************************ 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.699 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75495 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75495 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75495 ']' 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.700 08:24:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.700 [2024-12-13 08:24:05.810778] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:53.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.700 Zero copy mechanism will not be used. 
00:12:53.700 [2024-12-13 08:24:05.810987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75495 ] 00:12:53.700 [2024-12-13 08:24:05.986012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.959 [2024-12-13 08:24:06.103526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.959 [2024-12-13 08:24:06.303747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.959 [2024-12-13 08:24:06.303813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 BaseBdev1_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 [2024-12-13 08:24:06.703202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:54.528 
[2024-12-13 08:24:06.703343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.528 [2024-12-13 08:24:06.703373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.528 [2024-12-13 08:24:06.703386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.528 [2024-12-13 08:24:06.705657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.528 [2024-12-13 08:24:06.705700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.528 BaseBdev1 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 BaseBdev2_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 [2024-12-13 08:24:06.757270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:54.528 [2024-12-13 08:24:06.757401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.528 [2024-12-13 08:24:06.757438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:54.528 [2024-12-13 08:24:06.757468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.528 [2024-12-13 08:24:06.759550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.528 [2024-12-13 08:24:06.759628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.528 BaseBdev2 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 spare_malloc 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 spare_delay 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 [2024-12-13 08:24:06.837890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.528 [2024-12-13 08:24:06.838006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:54.528 [2024-12-13 08:24:06.838045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:54.528 [2024-12-13 08:24:06.838075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.528 [2024-12-13 08:24:06.840269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.528 [2024-12-13 08:24:06.840346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.528 spare 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 [2024-12-13 08:24:06.849918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.528 [2024-12-13 08:24:06.851791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.528 [2024-12-13 08:24:06.851937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.528 [2024-12-13 08:24:06.851998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:54.528 [2024-12-13 08:24:06.852317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:54.528 [2024-12-13 08:24:06.852530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.528 [2024-12-13 08:24:06.852571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.528 [2024-12-13 08:24:06.852781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.788 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.788 "name": "raid_bdev1", 00:12:54.788 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:12:54.788 "strip_size_kb": 0, 00:12:54.788 "state": "online", 00:12:54.788 
"raid_level": "raid1", 00:12:54.788 "superblock": false, 00:12:54.788 "num_base_bdevs": 2, 00:12:54.788 "num_base_bdevs_discovered": 2, 00:12:54.788 "num_base_bdevs_operational": 2, 00:12:54.788 "base_bdevs_list": [ 00:12:54.788 { 00:12:54.788 "name": "BaseBdev1", 00:12:54.788 "uuid": "56400eef-1b77-5438-9255-bfad2bb6a7a2", 00:12:54.788 "is_configured": true, 00:12:54.788 "data_offset": 0, 00:12:54.788 "data_size": 65536 00:12:54.788 }, 00:12:54.788 { 00:12:54.788 "name": "BaseBdev2", 00:12:54.788 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:12:54.788 "is_configured": true, 00:12:54.788 "data_offset": 0, 00:12:54.788 "data_size": 65536 00:12:54.788 } 00:12:54.788 ] 00:12:54.788 }' 00:12:54.788 08:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.788 08:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.048 [2024-12-13 08:24:07.245523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.048 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:55.307 [2024-12-13 08:24:07.536780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:55.307 /dev/nbd0 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.307 1+0 records in 00:12:55.307 1+0 records out 00:12:55.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061039 s, 6.7 MB/s 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:55.307 08:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:00.608 65536+0 records in 00:13:00.608 65536+0 records out 00:13:00.608 33554432 bytes (34 MB, 32 MiB) copied, 4.87155 s, 6.9 MB/s 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.608 [2024-12-13 08:24:12.726373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.608 [2024-12-13 08:24:12.738468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.608 "name": "raid_bdev1", 00:13:00.608 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:00.608 "strip_size_kb": 0, 00:13:00.608 "state": "online", 00:13:00.608 "raid_level": "raid1", 00:13:00.608 "superblock": false, 00:13:00.608 "num_base_bdevs": 2, 00:13:00.608 "num_base_bdevs_discovered": 1, 00:13:00.608 "num_base_bdevs_operational": 1, 00:13:00.608 "base_bdevs_list": [ 00:13:00.608 { 00:13:00.608 "name": null, 00:13:00.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.608 "is_configured": false, 00:13:00.608 "data_offset": 0, 00:13:00.608 "data_size": 65536 00:13:00.608 }, 00:13:00.608 { 00:13:00.608 "name": "BaseBdev2", 00:13:00.608 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:00.608 "is_configured": true, 00:13:00.608 "data_offset": 0, 00:13:00.608 "data_size": 65536 00:13:00.608 } 00:13:00.608 ] 00:13:00.608 }' 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.608 08:24:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.867 08:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.867 08:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.867 08:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.867 [2024-12-13 08:24:13.205660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.867 [2024-12-13 08:24:13.222579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:00.867 08:24:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.867 08:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.867 [2024-12-13 08:24:13.224521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.241 "name": "raid_bdev1", 00:13:02.241 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:02.241 "strip_size_kb": 0, 00:13:02.241 "state": "online", 00:13:02.241 "raid_level": "raid1", 00:13:02.241 "superblock": false, 00:13:02.241 "num_base_bdevs": 2, 00:13:02.241 "num_base_bdevs_discovered": 2, 00:13:02.241 "num_base_bdevs_operational": 2, 00:13:02.241 "process": { 00:13:02.241 "type": "rebuild", 00:13:02.241 "target": "spare", 00:13:02.241 "progress": { 00:13:02.241 "blocks": 20480, 
00:13:02.241 "percent": 31 00:13:02.241 } 00:13:02.241 }, 00:13:02.241 "base_bdevs_list": [ 00:13:02.241 { 00:13:02.241 "name": "spare", 00:13:02.241 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:02.241 "is_configured": true, 00:13:02.241 "data_offset": 0, 00:13:02.241 "data_size": 65536 00:13:02.241 }, 00:13:02.241 { 00:13:02.241 "name": "BaseBdev2", 00:13:02.241 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:02.241 "is_configured": true, 00:13:02.241 "data_offset": 0, 00:13:02.241 "data_size": 65536 00:13:02.241 } 00:13:02.241 ] 00:13:02.241 }' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 [2024-12-13 08:24:14.372137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.241 [2024-12-13 08:24:14.430485] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.241 [2024-12-13 08:24:14.430601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.241 [2024-12-13 08:24:14.430620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.241 [2024-12-13 08:24:14.430632] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.241 08:24:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.241 "name": "raid_bdev1", 00:13:02.241 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:02.241 "strip_size_kb": 0, 00:13:02.241 "state": "online", 00:13:02.241 "raid_level": "raid1", 00:13:02.241 
"superblock": false, 00:13:02.241 "num_base_bdevs": 2, 00:13:02.241 "num_base_bdevs_discovered": 1, 00:13:02.241 "num_base_bdevs_operational": 1, 00:13:02.241 "base_bdevs_list": [ 00:13:02.241 { 00:13:02.241 "name": null, 00:13:02.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.241 "is_configured": false, 00:13:02.241 "data_offset": 0, 00:13:02.241 "data_size": 65536 00:13:02.241 }, 00:13:02.241 { 00:13:02.241 "name": "BaseBdev2", 00:13:02.241 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:02.241 "is_configured": true, 00:13:02.241 "data_offset": 0, 00:13:02.241 "data_size": 65536 00:13:02.241 } 00:13:02.241 ] 00:13:02.241 }' 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.241 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:02.807 "name": "raid_bdev1", 00:13:02.807 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:02.807 "strip_size_kb": 0, 00:13:02.807 "state": "online", 00:13:02.807 "raid_level": "raid1", 00:13:02.807 "superblock": false, 00:13:02.807 "num_base_bdevs": 2, 00:13:02.807 "num_base_bdevs_discovered": 1, 00:13:02.807 "num_base_bdevs_operational": 1, 00:13:02.807 "base_bdevs_list": [ 00:13:02.807 { 00:13:02.807 "name": null, 00:13:02.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.807 "is_configured": false, 00:13:02.807 "data_offset": 0, 00:13:02.807 "data_size": 65536 00:13:02.807 }, 00:13:02.807 { 00:13:02.807 "name": "BaseBdev2", 00:13:02.807 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:02.807 "is_configured": true, 00:13:02.807 "data_offset": 0, 00:13:02.807 "data_size": 65536 00:13:02.807 } 00:13:02.807 ] 00:13:02.807 }' 00:13:02.807 08:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.807 08:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.807 08:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.807 08:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.807 08:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.807 08:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.808 08:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.808 [2024-12-13 08:24:15.090272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.808 [2024-12-13 08:24:15.110030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:02.808 08:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.808 
08:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:02.808 [2024-12-13 08:24:15.112165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.182 "name": "raid_bdev1", 00:13:04.182 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:04.182 "strip_size_kb": 0, 00:13:04.182 "state": "online", 00:13:04.182 "raid_level": "raid1", 00:13:04.182 "superblock": false, 00:13:04.182 "num_base_bdevs": 2, 00:13:04.182 "num_base_bdevs_discovered": 2, 00:13:04.182 "num_base_bdevs_operational": 2, 00:13:04.182 "process": { 00:13:04.182 "type": "rebuild", 00:13:04.182 "target": "spare", 00:13:04.182 "progress": { 00:13:04.182 "blocks": 20480, 00:13:04.182 "percent": 31 00:13:04.182 } 00:13:04.182 }, 00:13:04.182 "base_bdevs_list": [ 
00:13:04.182 { 00:13:04.182 "name": "spare", 00:13:04.182 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:04.182 "is_configured": true, 00:13:04.182 "data_offset": 0, 00:13:04.182 "data_size": 65536 00:13:04.182 }, 00:13:04.182 { 00:13:04.182 "name": "BaseBdev2", 00:13:04.182 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:04.182 "is_configured": true, 00:13:04.182 "data_offset": 0, 00:13:04.182 "data_size": 65536 00:13:04.182 } 00:13:04.182 ] 00:13:04.182 }' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=377 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.182 
08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.182 "name": "raid_bdev1", 00:13:04.182 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:04.182 "strip_size_kb": 0, 00:13:04.182 "state": "online", 00:13:04.182 "raid_level": "raid1", 00:13:04.182 "superblock": false, 00:13:04.182 "num_base_bdevs": 2, 00:13:04.182 "num_base_bdevs_discovered": 2, 00:13:04.182 "num_base_bdevs_operational": 2, 00:13:04.182 "process": { 00:13:04.182 "type": "rebuild", 00:13:04.182 "target": "spare", 00:13:04.182 "progress": { 00:13:04.182 "blocks": 22528, 00:13:04.182 "percent": 34 00:13:04.182 } 00:13:04.182 }, 00:13:04.182 "base_bdevs_list": [ 00:13:04.182 { 00:13:04.182 "name": "spare", 00:13:04.182 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:04.182 "is_configured": true, 00:13:04.182 "data_offset": 0, 00:13:04.182 "data_size": 65536 00:13:04.182 }, 00:13:04.182 { 00:13:04.182 "name": "BaseBdev2", 00:13:04.182 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:04.182 "is_configured": true, 00:13:04.182 "data_offset": 0, 00:13:04.182 "data_size": 65536 00:13:04.182 } 00:13:04.182 ] 00:13:04.182 }' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.182 08:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.115 "name": "raid_bdev1", 00:13:05.115 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:05.115 "strip_size_kb": 0, 00:13:05.115 "state": "online", 00:13:05.115 "raid_level": "raid1", 00:13:05.115 "superblock": false, 00:13:05.115 "num_base_bdevs": 2, 00:13:05.115 "num_base_bdevs_discovered": 2, 00:13:05.115 "num_base_bdevs_operational": 2, 00:13:05.115 "process": { 
00:13:05.115 "type": "rebuild", 00:13:05.115 "target": "spare", 00:13:05.115 "progress": { 00:13:05.115 "blocks": 45056, 00:13:05.115 "percent": 68 00:13:05.115 } 00:13:05.115 }, 00:13:05.115 "base_bdevs_list": [ 00:13:05.115 { 00:13:05.115 "name": "spare", 00:13:05.115 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:05.115 "is_configured": true, 00:13:05.115 "data_offset": 0, 00:13:05.115 "data_size": 65536 00:13:05.115 }, 00:13:05.115 { 00:13:05.115 "name": "BaseBdev2", 00:13:05.115 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:05.115 "is_configured": true, 00:13:05.115 "data_offset": 0, 00:13:05.115 "data_size": 65536 00:13:05.115 } 00:13:05.115 ] 00:13:05.115 }' 00:13:05.115 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.374 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.374 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.374 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.374 08:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.307 [2024-12-13 08:24:18.327096] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:06.307 [2024-12-13 08:24:18.327201] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:06.307 [2024-12-13 08:24:18.327261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.307 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.307 "name": "raid_bdev1", 00:13:06.307 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:06.307 "strip_size_kb": 0, 00:13:06.307 "state": "online", 00:13:06.307 "raid_level": "raid1", 00:13:06.307 "superblock": false, 00:13:06.307 "num_base_bdevs": 2, 00:13:06.307 "num_base_bdevs_discovered": 2, 00:13:06.307 "num_base_bdevs_operational": 2, 00:13:06.307 "base_bdevs_list": [ 00:13:06.307 { 00:13:06.307 "name": "spare", 00:13:06.307 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:06.307 "is_configured": true, 00:13:06.307 "data_offset": 0, 00:13:06.307 "data_size": 65536 00:13:06.307 }, 00:13:06.307 { 00:13:06.308 "name": "BaseBdev2", 00:13:06.308 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:06.308 "is_configured": true, 00:13:06.308 "data_offset": 0, 00:13:06.308 "data_size": 65536 00:13:06.308 } 00:13:06.308 ] 00:13:06.308 }' 00:13:06.308 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.308 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:06.308 08:24:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.566 "name": "raid_bdev1", 00:13:06.566 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:06.566 "strip_size_kb": 0, 00:13:06.566 "state": "online", 00:13:06.566 "raid_level": "raid1", 00:13:06.566 "superblock": false, 00:13:06.566 "num_base_bdevs": 2, 00:13:06.566 "num_base_bdevs_discovered": 2, 00:13:06.566 "num_base_bdevs_operational": 2, 00:13:06.566 "base_bdevs_list": [ 00:13:06.566 { 00:13:06.566 "name": "spare", 00:13:06.566 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:06.566 "is_configured": true, 
00:13:06.566 "data_offset": 0, 00:13:06.566 "data_size": 65536 00:13:06.566 }, 00:13:06.566 { 00:13:06.566 "name": "BaseBdev2", 00:13:06.566 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:06.566 "is_configured": true, 00:13:06.566 "data_offset": 0, 00:13:06.566 "data_size": 65536 00:13:06.566 } 00:13:06.566 ] 00:13:06.566 }' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.566 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.566 "name": "raid_bdev1", 00:13:06.566 "uuid": "8f7c5579-6b3f-4cb4-b8ed-931f33ecf3a6", 00:13:06.566 "strip_size_kb": 0, 00:13:06.566 "state": "online", 00:13:06.566 "raid_level": "raid1", 00:13:06.566 "superblock": false, 00:13:06.566 "num_base_bdevs": 2, 00:13:06.566 "num_base_bdevs_discovered": 2, 00:13:06.566 "num_base_bdevs_operational": 2, 00:13:06.566 "base_bdevs_list": [ 00:13:06.566 { 00:13:06.566 "name": "spare", 00:13:06.566 "uuid": "183016b0-4de9-5570-9523-2052ce612dd6", 00:13:06.567 "is_configured": true, 00:13:06.567 "data_offset": 0, 00:13:06.567 "data_size": 65536 00:13:06.567 }, 00:13:06.567 { 00:13:06.567 "name": "BaseBdev2", 00:13:06.567 "uuid": "f2a9aaab-c1cd-580d-88f2-2c51878db2f6", 00:13:06.567 "is_configured": true, 00:13:06.567 "data_offset": 0, 00:13:06.567 "data_size": 65536 00:13:06.567 } 00:13:06.567 ] 00:13:06.567 }' 00:13:06.567 08:24:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.567 08:24:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.133 [2024-12-13 08:24:19.327171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.133 [2024-12-13 08:24:19.327210] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:07.133 [2024-12-13 08:24:19.327309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:07.133 [2024-12-13 08:24:19.327394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:07.133 [2024-12-13 08:24:19.327443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:07.133 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:13:07.392 /dev/nbd0
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:07.392 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:07.393 1+0 records in
00:13:07.393 1+0 records out
00:13:07.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049382 s, 8.3 MB/s
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:07.393 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:13:07.666 /dev/nbd1
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:07.666 1+0 records in
00:13:07.666 1+0 records out
00:13:07.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245403 s, 16.7 MB/s
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:13:07.666 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:07.667 08:24:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:07.667 08:24:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:07.933 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75495
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75495 ']'
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75495
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75495
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:08.191 killing process with pid 75495
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75495'
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75495
00:13:08.191 Received shutdown signal, test time was about 60.000000 seconds
00:13:08.191
00:13:08.191 Latency(us)
00:13:08.191 [2024-12-13T08:24:20.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:08.191 [2024-12-13T08:24:20.556Z] ===================================================================================================================
00:13:08.191 [2024-12-13T08:24:20.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:13:08.191 08:24:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75495
00:13:08.191 [2024-12-13 08:24:20.519851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:08.758 [2024-12-13 08:24:20.827271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:09.692 08:24:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:13:09.692
00:13:09.692 real 0m16.221s
00:13:09.692 user 0m18.343s
00:13:09.692 sys 0m3.272s
00:13:09.692 08:24:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.692 08:24:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:09.692 ************************************
00:13:09.692 END TEST raid_rebuild_test
00:13:09.692 ************************************
00:13:09.692 08:24:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true
00:13:09.692 08:24:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:09.692 08:24:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.692 08:24:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:09.693 ************************************
00:13:09.693 START TEST raid_rebuild_test_sb
00:13:09.693 ************************************
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:09.693 08:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75919
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75919
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75919 ']'
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:09.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:09.693 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:09.951 [2024-12-13 08:24:22.088855] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization...
00:13:09.951 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:09.951 Zero copy mechanism will not be used.
00:13:09.951 [2024-12-13 08:24:22.089390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75919 ]
00:13:10.211 [2024-12-13 08:24:22.262235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.211 [2024-12-13 08:24:22.373928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.211 [2024-12-13 08:24:22.570500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.211 [2024-12-13 08:24:22.570550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.779 BaseBdev1_malloc
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.779 [2024-12-13 08:24:22.992281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:10.779 [2024-12-13 08:24:22.992486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.779 [2024-12-13 08:24:22.992556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:10.779 [2024-12-13 08:24:22.992608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.779 [2024-12-13 08:24:22.994718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.779 [2024-12-13 08:24:22.994828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:10.779 BaseBdev1
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:10.779 08:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:10.780 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 BaseBdev2_malloc
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 [2024-12-13 08:24:23.046946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:10.780 [2024-12-13 08:24:23.047009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.780 [2024-12-13 08:24:23.047032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:10.780 [2024-12-13 08:24:23.047043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.780 [2024-12-13 08:24:23.049070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.780 [2024-12-13 08:24:23.049119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:10.780 BaseBdev2
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 spare_malloc
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 spare_delay
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 [2024-12-13 08:24:23.128649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:10.780 [2024-12-13 08:24:23.128720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.780 [2024-12-13 08:24:23.128742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:10.780 [2024-12-13 08:24:23.128753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.780 [2024-12-13 08:24:23.131063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.780 [2024-12-13 08:24:23.131118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:10.780 spare
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.780 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.780 [2024-12-13 08:24:23.140686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:10.780 [2024-12-13 08:24:23.142653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:10.780 [2024-12-13 08:24:23.142857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:10.780 [2024-12-13 08:24:23.142874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:10.780 [2024-12-13 08:24:23.143161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:10.780 [2024-12-13 08:24:23.143363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:10.780 [2024-12-13 08:24:23.143381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:10.780 [2024-12-13 08:24:23.143560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.040 "name": "raid_bdev1",
00:13:11.040 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170",
00:13:11.040 "strip_size_kb": 0,
00:13:11.040 "state": "online",
00:13:11.040 "raid_level": "raid1",
00:13:11.040 "superblock": true,
00:13:11.040 "num_base_bdevs": 2,
00:13:11.040 "num_base_bdevs_discovered": 2,
00:13:11.040 "num_base_bdevs_operational": 2,
00:13:11.040 "base_bdevs_list": [
00:13:11.040 {
00:13:11.040 "name": "BaseBdev1",
00:13:11.040 "uuid": "16a9a601-ae64-5faa-bf32-086a2b871897",
00:13:11.040 "is_configured": true,
00:13:11.040 "data_offset": 2048,
00:13:11.040 "data_size": 63488
00:13:11.040 },
00:13:11.040 {
00:13:11.040 "name": "BaseBdev2",
00:13:11.040 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02",
00:13:11.040 "is_configured": true,
00:13:11.040 "data_offset": 2048,
00:13:11.040 "data_size": 63488
00:13:11.040 }
00:13:11.040 ]
00:13:11.040 }'
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.040 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.299 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:11.299 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:11.299 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.299 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.300 [2024-12-13 08:24:23.532309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:11.300 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:11.559 [2024-12-13 08:24:23.811598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
/dev/nbd0
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:11.559 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:11.560 1+0 records in
00:13:11.560 1+0 records out
00:13:11.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313069 s, 13.1 MB/s
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:11.560 08:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:13:15.751 63488+0 records in
00:13:15.751 63488+0 records out
00:13:15.751 32505856 bytes (33 MB, 31 MiB) copied, 4.15837 s, 7.8 MB/s
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:15.752 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:16.010 [2024-12-13 08:24:28.238780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:16.010 [2024-12-13 08:24:28.274807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:16.010 "name": "raid_bdev1",
00:13:16.010 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170",
00:13:16.010 "strip_size_kb": 0,
00:13:16.010 "state": "online",
00:13:16.010 "raid_level": "raid1",
00:13:16.010 "superblock": true,
00:13:16.010 "num_base_bdevs": 2,
00:13:16.010 "num_base_bdevs_discovered": 1,
00:13:16.010 "num_base_bdevs_operational": 1,
00:13:16.010 "base_bdevs_list": [
00:13:16.010 {
00:13:16.010 "name": null,
00:13:16.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:16.010 "is_configured": false,
00:13:16.010 "data_offset": 0,
00:13:16.010 "data_size": 63488
00:13:16.010 },
00:13:16.010 {
00:13:16.010 "name": "BaseBdev2",
00:13:16.010 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02",
00:13:16.010 "is_configured": true,
00:13:16.010 "data_offset": 2048,
00:13:16.010 "data_size": 63488
00:13:16.010 }
00:13:16.010 ]
00:13:16.010 }'
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:16.010 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:16.576 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:16.576 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.576 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:16.576 [2024-12-13 08:24:28.654246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:16.576 [2024-12-13 08:24:28.670927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360
00:13:16.576 08:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.576 [2024-12-13 08:24:28.672849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:16.576 08:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:17.513 "name": "raid_bdev1",
00:13:17.513 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170",
00:13:17.513 "strip_size_kb": 0,
00:13:17.513 "state": "online",
00:13:17.513 "raid_level": "raid1",
00:13:17.513 "superblock": true,
00:13:17.513 "num_base_bdevs": 2,
00:13:17.513 "num_base_bdevs_discovered": 2,
00:13:17.513 "num_base_bdevs_operational": 2,
00:13:17.513 "process": {
00:13:17.513 "type": "rebuild",
00:13:17.513 "target": "spare",
00:13:17.513 "progress": {
00:13:17.513 "blocks": 20480,
00:13:17.513 "percent": 32
00:13:17.513 }
00:13:17.513 },
00:13:17.513 "base_bdevs_list": [
00:13:17.513 {
00:13:17.513 "name": "spare",
00:13:17.513 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad",
00:13:17.513 "is_configured": true,
00:13:17.513 "data_offset": 2048,
00:13:17.513 "data_size": 63488
00:13:17.513 },
00:13:17.513 {
00:13:17.513 "name": "BaseBdev2",
00:13:17.513 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02",
00:13:17.513 "is_configured": true,
00:13:17.513 "data_offset": 2048,
00:13:17.513 "data_size": 63488
00:13:17.513 } 00:13:17.513 ] 00:13:17.513 }' 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.513 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.513 [2024-12-13 08:24:29.808012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.773 [2024-12-13 08:24:29.878119] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.773 [2024-12-13 08:24:29.878194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.773 [2024-12-13 08:24:29.878209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.773 [2024-12-13 08:24:29.878222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.773 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.773 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.773 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.774 "name": "raid_bdev1", 00:13:17.774 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:17.774 "strip_size_kb": 0, 00:13:17.774 "state": "online", 00:13:17.774 "raid_level": "raid1", 00:13:17.774 "superblock": true, 00:13:17.774 "num_base_bdevs": 2, 00:13:17.774 "num_base_bdevs_discovered": 1, 00:13:17.774 "num_base_bdevs_operational": 1, 00:13:17.774 "base_bdevs_list": [ 00:13:17.774 { 00:13:17.774 "name": null, 00:13:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.774 "is_configured": false, 00:13:17.774 "data_offset": 0, 00:13:17.774 "data_size": 63488 00:13:17.774 }, 00:13:17.774 { 00:13:17.774 "name": "BaseBdev2", 00:13:17.774 "uuid": 
"3e743792-fc80-5de1-b025-76359cb41d02", 00:13:17.774 "is_configured": true, 00:13:17.774 "data_offset": 2048, 00:13:17.774 "data_size": 63488 00:13:17.774 } 00:13:17.774 ] 00:13:17.774 }' 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.774 08:24:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.034 "name": "raid_bdev1", 00:13:18.034 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:18.034 "strip_size_kb": 0, 00:13:18.034 "state": "online", 00:13:18.034 "raid_level": "raid1", 00:13:18.034 "superblock": true, 00:13:18.034 "num_base_bdevs": 2, 00:13:18.034 "num_base_bdevs_discovered": 1, 00:13:18.034 "num_base_bdevs_operational": 1, 00:13:18.034 "base_bdevs_list": [ 00:13:18.034 { 
00:13:18.034 "name": null, 00:13:18.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.034 "is_configured": false, 00:13:18.034 "data_offset": 0, 00:13:18.034 "data_size": 63488 00:13:18.034 }, 00:13:18.034 { 00:13:18.034 "name": "BaseBdev2", 00:13:18.034 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:18.034 "is_configured": true, 00:13:18.034 "data_offset": 2048, 00:13:18.034 "data_size": 63488 00:13:18.034 } 00:13:18.034 ] 00:13:18.034 }' 00:13:18.034 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.293 [2024-12-13 08:24:30.473634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.293 [2024-12-13 08:24:30.489857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:18.293 08:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.294 08:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.294 [2024-12-13 08:24:30.491768] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.233 08:24:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.233 "name": "raid_bdev1", 00:13:19.233 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:19.233 "strip_size_kb": 0, 00:13:19.233 "state": "online", 00:13:19.233 "raid_level": "raid1", 00:13:19.233 "superblock": true, 00:13:19.233 "num_base_bdevs": 2, 00:13:19.233 "num_base_bdevs_discovered": 2, 00:13:19.233 "num_base_bdevs_operational": 2, 00:13:19.233 "process": { 00:13:19.233 "type": "rebuild", 00:13:19.233 "target": "spare", 00:13:19.233 "progress": { 00:13:19.233 "blocks": 20480, 00:13:19.233 "percent": 32 00:13:19.233 } 00:13:19.233 }, 00:13:19.233 "base_bdevs_list": [ 00:13:19.233 { 00:13:19.233 "name": "spare", 00:13:19.233 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:19.233 "is_configured": true, 00:13:19.233 "data_offset": 2048, 00:13:19.233 "data_size": 63488 00:13:19.233 }, 00:13:19.233 { 00:13:19.233 "name": "BaseBdev2", 00:13:19.233 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:19.233 
"is_configured": true, 00:13:19.233 "data_offset": 2048, 00:13:19.233 "data_size": 63488 00:13:19.233 } 00:13:19.233 ] 00:13:19.233 }' 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.233 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.493 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.493 "name": "raid_bdev1", 00:13:19.493 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:19.493 "strip_size_kb": 0, 00:13:19.493 "state": "online", 00:13:19.493 "raid_level": "raid1", 00:13:19.493 "superblock": true, 00:13:19.493 "num_base_bdevs": 2, 00:13:19.493 "num_base_bdevs_discovered": 2, 00:13:19.493 "num_base_bdevs_operational": 2, 00:13:19.493 "process": { 00:13:19.493 "type": "rebuild", 00:13:19.493 "target": "spare", 00:13:19.493 "progress": { 00:13:19.493 "blocks": 22528, 00:13:19.493 "percent": 35 00:13:19.493 } 00:13:19.493 }, 00:13:19.493 "base_bdevs_list": [ 00:13:19.493 { 00:13:19.493 "name": "spare", 00:13:19.493 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:19.493 "is_configured": true, 00:13:19.493 "data_offset": 2048, 00:13:19.493 "data_size": 63488 00:13:19.493 }, 00:13:19.493 { 00:13:19.493 "name": "BaseBdev2", 00:13:19.493 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:19.493 "is_configured": true, 00:13:19.493 "data_offset": 2048, 00:13:19.493 "data_size": 63488 00:13:19.493 } 00:13:19.493 ] 00:13:19.493 }' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.493 08:24:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.493 08:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.431 08:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.692 "name": "raid_bdev1", 00:13:20.692 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:20.692 "strip_size_kb": 0, 00:13:20.692 "state": "online", 00:13:20.692 "raid_level": "raid1", 00:13:20.692 "superblock": true, 00:13:20.692 "num_base_bdevs": 2, 00:13:20.692 "num_base_bdevs_discovered": 2, 00:13:20.692 "num_base_bdevs_operational": 2, 00:13:20.692 "process": { 
00:13:20.692 "type": "rebuild", 00:13:20.692 "target": "spare", 00:13:20.692 "progress": { 00:13:20.692 "blocks": 45056, 00:13:20.692 "percent": 70 00:13:20.692 } 00:13:20.692 }, 00:13:20.692 "base_bdevs_list": [ 00:13:20.692 { 00:13:20.692 "name": "spare", 00:13:20.692 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:20.692 "is_configured": true, 00:13:20.692 "data_offset": 2048, 00:13:20.692 "data_size": 63488 00:13:20.692 }, 00:13:20.692 { 00:13:20.692 "name": "BaseBdev2", 00:13:20.692 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:20.692 "is_configured": true, 00:13:20.692 "data_offset": 2048, 00:13:20.692 "data_size": 63488 00:13:20.692 } 00:13:20.692 ] 00:13:20.692 }' 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.692 08:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.261 [2024-12-13 08:24:33.605323] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.261 [2024-12-13 08:24:33.605424] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.261 [2024-12-13 08:24:33.605534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.830 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.830 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.831 
08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.831 "name": "raid_bdev1", 00:13:21.831 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:21.831 "strip_size_kb": 0, 00:13:21.831 "state": "online", 00:13:21.831 "raid_level": "raid1", 00:13:21.831 "superblock": true, 00:13:21.831 "num_base_bdevs": 2, 00:13:21.831 "num_base_bdevs_discovered": 2, 00:13:21.831 "num_base_bdevs_operational": 2, 00:13:21.831 "base_bdevs_list": [ 00:13:21.831 { 00:13:21.831 "name": "spare", 00:13:21.831 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:21.831 "is_configured": true, 00:13:21.831 "data_offset": 2048, 00:13:21.831 "data_size": 63488 00:13:21.831 }, 00:13:21.831 { 00:13:21.831 "name": "BaseBdev2", 00:13:21.831 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:21.831 "is_configured": true, 00:13:21.831 "data_offset": 2048, 00:13:21.831 "data_size": 63488 00:13:21.831 } 00:13:21.831 ] 00:13:21.831 }' 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:21.831 08:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.831 "name": "raid_bdev1", 00:13:21.831 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:21.831 "strip_size_kb": 0, 00:13:21.831 "state": "online", 00:13:21.831 "raid_level": "raid1", 00:13:21.831 "superblock": true, 00:13:21.831 "num_base_bdevs": 2, 00:13:21.831 "num_base_bdevs_discovered": 2, 00:13:21.831 "num_base_bdevs_operational": 2, 00:13:21.831 "base_bdevs_list": [ 00:13:21.831 { 00:13:21.831 
"name": "spare", 00:13:21.831 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:21.831 "is_configured": true, 00:13:21.831 "data_offset": 2048, 00:13:21.831 "data_size": 63488 00:13:21.831 }, 00:13:21.831 { 00:13:21.831 "name": "BaseBdev2", 00:13:21.831 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:21.831 "is_configured": true, 00:13:21.831 "data_offset": 2048, 00:13:21.831 "data_size": 63488 00:13:21.831 } 00:13:21.831 ] 00:13:21.831 }' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.831 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.091 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.091 "name": "raid_bdev1", 00:13:22.091 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:22.091 "strip_size_kb": 0, 00:13:22.091 "state": "online", 00:13:22.091 "raid_level": "raid1", 00:13:22.091 "superblock": true, 00:13:22.091 "num_base_bdevs": 2, 00:13:22.091 "num_base_bdevs_discovered": 2, 00:13:22.091 "num_base_bdevs_operational": 2, 00:13:22.091 "base_bdevs_list": [ 00:13:22.091 { 00:13:22.091 "name": "spare", 00:13:22.091 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:22.091 "is_configured": true, 00:13:22.091 "data_offset": 2048, 00:13:22.091 "data_size": 63488 00:13:22.091 }, 00:13:22.091 { 00:13:22.091 "name": "BaseBdev2", 00:13:22.091 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:22.091 "is_configured": true, 00:13:22.091 "data_offset": 2048, 00:13:22.091 "data_size": 63488 00:13:22.091 } 00:13:22.091 ] 00:13:22.091 }' 00:13:22.091 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.091 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.351 [2024-12-13 08:24:34.626715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.351 [2024-12-13 08:24:34.626755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.351 [2024-12-13 08:24:34.626834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.351 [2024-12-13 08:24:34.626903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.351 [2024-12-13 08:24:34.626913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.351 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.611 /dev/nbd0 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.611 1+0 records in 00:13:22.611 1+0 records out 00:13:22.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197557 s, 20.7 MB/s 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.611 08:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:22.871 /dev/nbd1 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:22.871 08:24:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.871 1+0 records in 00:13:22.871 1+0 records out 00:13:22.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492311 s, 8.3 MB/s 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.871 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.129 
08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.129 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.389 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.648 [2024-12-13 08:24:35.813367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:23.648 [2024-12-13 08:24:35.813458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.648 [2024-12-13 08:24:35.813480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:23.648 [2024-12-13 08:24:35.813490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.648 [2024-12-13 08:24:35.815610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.648 [2024-12-13 08:24:35.815649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:23.648 [2024-12-13 08:24:35.815747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:23.648 [2024-12-13 
08:24:35.815802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.648 [2024-12-13 08:24:35.815944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.648 spare 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:23.648 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.649 [2024-12-13 08:24:35.915843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:23.649 [2024-12-13 08:24:35.915876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.649 [2024-12-13 08:24:35.916150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:23.649 [2024-12-13 08:24:35.916361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:23.649 [2024-12-13 08:24:35.916376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:23.649 [2024-12-13 08:24:35.916532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.649 "name": "raid_bdev1", 00:13:23.649 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:23.649 "strip_size_kb": 0, 00:13:23.649 "state": "online", 00:13:23.649 "raid_level": "raid1", 00:13:23.649 "superblock": true, 00:13:23.649 "num_base_bdevs": 2, 00:13:23.649 "num_base_bdevs_discovered": 2, 00:13:23.649 "num_base_bdevs_operational": 2, 00:13:23.649 "base_bdevs_list": [ 00:13:23.649 { 00:13:23.649 "name": "spare", 00:13:23.649 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:23.649 "is_configured": true, 00:13:23.649 "data_offset": 2048, 00:13:23.649 "data_size": 63488 00:13:23.649 }, 00:13:23.649 { 00:13:23.649 "name": "BaseBdev2", 00:13:23.649 "uuid": 
"3e743792-fc80-5de1-b025-76359cb41d02", 00:13:23.649 "is_configured": true, 00:13:23.649 "data_offset": 2048, 00:13:23.649 "data_size": 63488 00:13:23.649 } 00:13:23.649 ] 00:13:23.649 }' 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.649 08:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.218 "name": "raid_bdev1", 00:13:24.218 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:24.218 "strip_size_kb": 0, 00:13:24.218 "state": "online", 00:13:24.218 "raid_level": "raid1", 00:13:24.218 "superblock": true, 00:13:24.218 "num_base_bdevs": 2, 00:13:24.218 "num_base_bdevs_discovered": 2, 00:13:24.218 "num_base_bdevs_operational": 2, 00:13:24.218 "base_bdevs_list": [ 00:13:24.218 { 
00:13:24.218 "name": "spare", 00:13:24.218 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:24.218 "is_configured": true, 00:13:24.218 "data_offset": 2048, 00:13:24.218 "data_size": 63488 00:13:24.218 }, 00:13:24.218 { 00:13:24.218 "name": "BaseBdev2", 00:13:24.218 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:24.218 "is_configured": true, 00:13:24.218 "data_offset": 2048, 00:13:24.218 "data_size": 63488 00:13:24.218 } 00:13:24.218 ] 00:13:24.218 }' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 [2024-12-13 08:24:36.556197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.218 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.478 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.478 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.478 "name": "raid_bdev1", 00:13:24.478 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:24.478 "strip_size_kb": 0, 00:13:24.478 
"state": "online", 00:13:24.478 "raid_level": "raid1", 00:13:24.478 "superblock": true, 00:13:24.478 "num_base_bdevs": 2, 00:13:24.478 "num_base_bdevs_discovered": 1, 00:13:24.478 "num_base_bdevs_operational": 1, 00:13:24.478 "base_bdevs_list": [ 00:13:24.478 { 00:13:24.478 "name": null, 00:13:24.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.478 "is_configured": false, 00:13:24.478 "data_offset": 0, 00:13:24.478 "data_size": 63488 00:13:24.478 }, 00:13:24.478 { 00:13:24.478 "name": "BaseBdev2", 00:13:24.478 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:24.478 "is_configured": true, 00:13:24.478 "data_offset": 2048, 00:13:24.478 "data_size": 63488 00:13:24.478 } 00:13:24.478 ] 00:13:24.478 }' 00:13:24.478 08:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.478 08:24:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.738 08:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.738 08:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.738 08:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.738 [2024-12-13 08:24:37.015452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.738 [2024-12-13 08:24:37.015678] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:24.738 [2024-12-13 08:24:37.015702] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:24.738 [2024-12-13 08:24:37.015738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.738 [2024-12-13 08:24:37.031486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:24.738 08:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.738 08:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:24.738 [2024-12-13 08:24:37.033346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.675 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.675 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.675 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.675 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.675 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.010 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.010 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.011 "name": "raid_bdev1", 00:13:26.011 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:26.011 "strip_size_kb": 0, 00:13:26.011 "state": "online", 00:13:26.011 "raid_level": "raid1", 
00:13:26.011 "superblock": true, 00:13:26.011 "num_base_bdevs": 2, 00:13:26.011 "num_base_bdevs_discovered": 2, 00:13:26.011 "num_base_bdevs_operational": 2, 00:13:26.011 "process": { 00:13:26.011 "type": "rebuild", 00:13:26.011 "target": "spare", 00:13:26.011 "progress": { 00:13:26.011 "blocks": 20480, 00:13:26.011 "percent": 32 00:13:26.011 } 00:13:26.011 }, 00:13:26.011 "base_bdevs_list": [ 00:13:26.011 { 00:13:26.011 "name": "spare", 00:13:26.011 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:26.011 "is_configured": true, 00:13:26.011 "data_offset": 2048, 00:13:26.011 "data_size": 63488 00:13:26.011 }, 00:13:26.011 { 00:13:26.011 "name": "BaseBdev2", 00:13:26.011 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:26.011 "is_configured": true, 00:13:26.011 "data_offset": 2048, 00:13:26.011 "data_size": 63488 00:13:26.011 } 00:13:26.011 ] 00:13:26.011 }' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.011 [2024-12-13 08:24:38.169178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.011 [2024-12-13 08:24:38.238674] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.011 [2024-12-13 08:24:38.238756] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:26.011 [2024-12-13 08:24:38.238774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.011 [2024-12-13 08:24:38.238784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.011 "name": "raid_bdev1", 00:13:26.011 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:26.011 "strip_size_kb": 0, 00:13:26.011 "state": "online", 00:13:26.011 "raid_level": "raid1", 00:13:26.011 "superblock": true, 00:13:26.011 "num_base_bdevs": 2, 00:13:26.011 "num_base_bdevs_discovered": 1, 00:13:26.011 "num_base_bdevs_operational": 1, 00:13:26.011 "base_bdevs_list": [ 00:13:26.011 { 00:13:26.011 "name": null, 00:13:26.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.011 "is_configured": false, 00:13:26.011 "data_offset": 0, 00:13:26.011 "data_size": 63488 00:13:26.011 }, 00:13:26.011 { 00:13:26.011 "name": "BaseBdev2", 00:13:26.011 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:26.011 "is_configured": true, 00:13:26.011 "data_offset": 2048, 00:13:26.011 "data_size": 63488 00:13:26.011 } 00:13:26.011 ] 00:13:26.011 }' 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.011 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.614 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.614 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.614 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.614 [2024-12-13 08:24:38.762484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.614 [2024-12-13 08:24:38.762555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.614 [2024-12-13 08:24:38.762576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:26.614 [2024-12-13 08:24:38.762588] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.614 [2024-12-13 08:24:38.763097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.614 [2024-12-13 08:24:38.763144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.614 [2024-12-13 08:24:38.763250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:26.614 [2024-12-13 08:24:38.763275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:26.614 [2024-12-13 08:24:38.763285] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:26.614 [2024-12-13 08:24:38.763316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.614 [2024-12-13 08:24:38.780326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:26.614 spare 00:13:26.614 08:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.614 08:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:26.615 [2024-12-13 08:24:38.782299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.552 "name": "raid_bdev1", 00:13:27.552 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:27.552 "strip_size_kb": 0, 00:13:27.552 "state": "online", 00:13:27.552 "raid_level": "raid1", 00:13:27.552 "superblock": true, 00:13:27.552 "num_base_bdevs": 2, 00:13:27.552 "num_base_bdevs_discovered": 2, 00:13:27.552 "num_base_bdevs_operational": 2, 00:13:27.552 "process": { 00:13:27.552 "type": "rebuild", 00:13:27.552 "target": "spare", 00:13:27.552 "progress": { 00:13:27.552 "blocks": 20480, 00:13:27.552 "percent": 32 00:13:27.552 } 00:13:27.552 }, 00:13:27.552 "base_bdevs_list": [ 00:13:27.552 { 00:13:27.552 "name": "spare", 00:13:27.552 "uuid": "9a92d121-fedd-553b-ab1a-6d1ab301f3ad", 00:13:27.552 "is_configured": true, 00:13:27.552 "data_offset": 2048, 00:13:27.552 "data_size": 63488 00:13:27.552 }, 00:13:27.552 { 00:13:27.552 "name": "BaseBdev2", 00:13:27.552 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:27.552 "is_configured": true, 00:13:27.552 "data_offset": 2048, 00:13:27.552 "data_size": 63488 00:13:27.552 } 00:13:27.552 ] 00:13:27.552 }' 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.552 
08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.552 08:24:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.813 [2024-12-13 08:24:39.921769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.813 [2024-12-13 08:24:39.988077] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.813 [2024-12-13 08:24:39.988184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.813 [2024-12-13 08:24:39.988202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.813 [2024-12-13 08:24:39.988210] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.813 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.813 "name": "raid_bdev1", 00:13:27.813 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:27.813 "strip_size_kb": 0, 00:13:27.813 "state": "online", 00:13:27.813 "raid_level": "raid1", 00:13:27.813 "superblock": true, 00:13:27.813 "num_base_bdevs": 2, 00:13:27.813 "num_base_bdevs_discovered": 1, 00:13:27.813 "num_base_bdevs_operational": 1, 00:13:27.814 "base_bdevs_list": [ 00:13:27.814 { 00:13:27.814 "name": null, 00:13:27.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.814 "is_configured": false, 00:13:27.814 "data_offset": 0, 00:13:27.814 "data_size": 63488 00:13:27.814 }, 00:13:27.814 { 00:13:27.814 "name": "BaseBdev2", 00:13:27.814 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:27.814 "is_configured": true, 00:13:27.814 "data_offset": 2048, 00:13:27.814 "data_size": 63488 00:13:27.814 } 00:13:27.814 ] 00:13:27.814 }' 00:13:27.814 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.814 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.073 08:24:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.073 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.332 "name": "raid_bdev1", 00:13:28.332 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:28.332 "strip_size_kb": 0, 00:13:28.332 "state": "online", 00:13:28.332 "raid_level": "raid1", 00:13:28.332 "superblock": true, 00:13:28.332 "num_base_bdevs": 2, 00:13:28.332 "num_base_bdevs_discovered": 1, 00:13:28.332 "num_base_bdevs_operational": 1, 00:13:28.332 "base_bdevs_list": [ 00:13:28.332 { 00:13:28.332 "name": null, 00:13:28.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.332 "is_configured": false, 00:13:28.332 "data_offset": 0, 00:13:28.332 "data_size": 63488 00:13:28.332 }, 00:13:28.332 { 00:13:28.332 "name": "BaseBdev2", 00:13:28.332 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:28.332 "is_configured": true, 00:13:28.332 "data_offset": 2048, 00:13:28.332 "data_size": 
63488 00:13:28.332 } 00:13:28.332 ] 00:13:28.332 }' 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.332 [2024-12-13 08:24:40.581574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:28.332 [2024-12-13 08:24:40.581638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.332 [2024-12-13 08:24:40.581660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:28.332 [2024-12-13 08:24:40.581680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.332 [2024-12-13 08:24:40.582162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.332 [2024-12-13 08:24:40.582181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:28.332 [2024-12-13 08:24:40.582267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:28.332 [2024-12-13 08:24:40.582281] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:28.332 [2024-12-13 08:24:40.582290] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:28.332 [2024-12-13 08:24:40.582300] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:28.332 BaseBdev1 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.332 08:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.269 08:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.528 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.528 "name": "raid_bdev1", 00:13:29.528 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:29.528 "strip_size_kb": 0, 00:13:29.528 "state": "online", 00:13:29.528 "raid_level": "raid1", 00:13:29.528 "superblock": true, 00:13:29.528 "num_base_bdevs": 2, 00:13:29.528 "num_base_bdevs_discovered": 1, 00:13:29.528 "num_base_bdevs_operational": 1, 00:13:29.528 "base_bdevs_list": [ 00:13:29.528 { 00:13:29.528 "name": null, 00:13:29.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.528 "is_configured": false, 00:13:29.528 "data_offset": 0, 00:13:29.528 "data_size": 63488 00:13:29.528 }, 00:13:29.528 { 00:13:29.528 "name": "BaseBdev2", 00:13:29.528 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:29.528 "is_configured": true, 00:13:29.528 "data_offset": 2048, 00:13:29.528 "data_size": 63488 00:13:29.528 } 00:13:29.528 ] 00:13:29.528 }' 00:13:29.528 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.528 08:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.786 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.786 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.786 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:29.786 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.786 08:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.786 "name": "raid_bdev1", 00:13:29.786 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:29.786 "strip_size_kb": 0, 00:13:29.786 "state": "online", 00:13:29.786 "raid_level": "raid1", 00:13:29.786 "superblock": true, 00:13:29.786 "num_base_bdevs": 2, 00:13:29.786 "num_base_bdevs_discovered": 1, 00:13:29.786 "num_base_bdevs_operational": 1, 00:13:29.786 "base_bdevs_list": [ 00:13:29.786 { 00:13:29.786 "name": null, 00:13:29.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.786 "is_configured": false, 00:13:29.786 "data_offset": 0, 00:13:29.786 "data_size": 63488 00:13:29.786 }, 00:13:29.786 { 00:13:29.786 "name": "BaseBdev2", 00:13:29.786 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:29.786 "is_configured": true, 00:13:29.786 "data_offset": 2048, 00:13:29.786 "data_size": 63488 00:13:29.786 } 00:13:29.786 ] 00:13:29.786 }' 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.786 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.786 08:24:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.044 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.044 [2024-12-13 08:24:42.167019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.045 [2024-12-13 08:24:42.167274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:30.045 [2024-12-13 08:24:42.167354] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.045 request: 00:13:30.045 { 00:13:30.045 "base_bdev": "BaseBdev1", 00:13:30.045 "raid_bdev": "raid_bdev1", 00:13:30.045 "method": 
"bdev_raid_add_base_bdev", 00:13:30.045 "req_id": 1 00:13:30.045 } 00:13:30.045 Got JSON-RPC error response 00:13:30.045 response: 00:13:30.045 { 00:13:30.045 "code": -22, 00:13:30.045 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:30.045 } 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.045 08:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.982 08:24:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.982 "name": "raid_bdev1", 00:13:30.982 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:30.982 "strip_size_kb": 0, 00:13:30.982 "state": "online", 00:13:30.982 "raid_level": "raid1", 00:13:30.982 "superblock": true, 00:13:30.982 "num_base_bdevs": 2, 00:13:30.982 "num_base_bdevs_discovered": 1, 00:13:30.982 "num_base_bdevs_operational": 1, 00:13:30.982 "base_bdevs_list": [ 00:13:30.982 { 00:13:30.982 "name": null, 00:13:30.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.982 "is_configured": false, 00:13:30.982 "data_offset": 0, 00:13:30.982 "data_size": 63488 00:13:30.982 }, 00:13:30.982 { 00:13:30.982 "name": "BaseBdev2", 00:13:30.982 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:30.982 "is_configured": true, 00:13:30.982 "data_offset": 2048, 00:13:30.982 "data_size": 63488 00:13:30.982 } 00:13:30.982 ] 00:13:30.982 }' 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.982 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.241 "name": "raid_bdev1", 00:13:31.241 "uuid": "b058d5f5-838e-43bd-a441-f991768d1170", 00:13:31.241 "strip_size_kb": 0, 00:13:31.241 "state": "online", 00:13:31.241 "raid_level": "raid1", 00:13:31.241 "superblock": true, 00:13:31.241 "num_base_bdevs": 2, 00:13:31.241 "num_base_bdevs_discovered": 1, 00:13:31.241 "num_base_bdevs_operational": 1, 00:13:31.241 "base_bdevs_list": [ 00:13:31.241 { 00:13:31.241 "name": null, 00:13:31.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.241 "is_configured": false, 00:13:31.241 "data_offset": 0, 00:13:31.241 "data_size": 63488 00:13:31.241 }, 00:13:31.241 { 00:13:31.241 "name": "BaseBdev2", 00:13:31.241 "uuid": "3e743792-fc80-5de1-b025-76359cb41d02", 00:13:31.241 "is_configured": true, 00:13:31.241 "data_offset": 2048, 00:13:31.241 "data_size": 63488 00:13:31.241 } 00:13:31.241 ] 00:13:31.241 }' 00:13:31.241 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75919 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75919 ']' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75919 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75919 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.501 killing process with pid 75919 00:13:31.501 Received shutdown signal, test time was about 60.000000 seconds 00:13:31.501 00:13:31.501 Latency(us) 00:13:31.501 [2024-12-13T08:24:43.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.501 [2024-12-13T08:24:43.866Z] =================================================================================================================== 00:13:31.501 [2024-12-13T08:24:43.866Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75919' 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75919 00:13:31.501 [2024-12-13 08:24:43.747413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.501 [2024-12-13 
08:24:43.747537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.501 [2024-12-13 08:24:43.747592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.501 [2024-12-13 08:24:43.747620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:31.501 08:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75919 00:13:31.759 [2024-12-13 08:24:44.052548] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.135 00:13:33.135 real 0m23.176s 00:13:33.135 user 0m28.172s 00:13:33.135 sys 0m3.684s 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.135 ************************************ 00:13:33.135 END TEST raid_rebuild_test_sb 00:13:33.135 ************************************ 00:13:33.135 08:24:45 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:33.135 08:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.135 08:24:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.135 08:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.135 ************************************ 00:13:33.135 START TEST raid_rebuild_test_io 00:13:33.135 ************************************ 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.135 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.136 
08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76643 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76643 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76643 ']' 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.136 08:24:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.136 [2024-12-13 08:24:45.341447] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:13:33.136 [2024-12-13 08:24:45.341689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.136 Zero copy mechanism will not be used. 
00:13:33.136 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76643 ] 00:13:33.393 [2024-12-13 08:24:45.515729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.394 [2024-12-13 08:24:45.636024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.652 [2024-12-13 08:24:45.836438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.652 [2024-12-13 08:24:45.836581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.910 BaseBdev1_malloc 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.910 [2024-12-13 08:24:46.226961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:33.910 [2024-12-13 08:24:46.227024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:33.910 [2024-12-13 08:24:46.227046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:33.910 [2024-12-13 08:24:46.227057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.910 [2024-12-13 08:24:46.229259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.910 [2024-12-13 08:24:46.229299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.910 BaseBdev1 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.910 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.169 BaseBdev2_malloc 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 [2024-12-13 08:24:46.281487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:34.170 [2024-12-13 08:24:46.281550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.170 [2024-12-13 08:24:46.281570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.170 [2024-12-13 08:24:46.281582] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.170 [2024-12-13 08:24:46.283696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.170 [2024-12-13 08:24:46.283736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.170 BaseBdev2 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 spare_malloc 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 spare_delay 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 [2024-12-13 08:24:46.359877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.170 [2024-12-13 08:24:46.359936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:34.170 [2024-12-13 08:24:46.359972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:34.170 [2024-12-13 08:24:46.359983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.170 [2024-12-13 08:24:46.362072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.170 [2024-12-13 08:24:46.362121] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.170 spare 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 [2024-12-13 08:24:46.371905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.170 [2024-12-13 08:24:46.373727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.170 [2024-12-13 08:24:46.373812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.170 [2024-12-13 08:24:46.373825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:34.170 [2024-12-13 08:24:46.374060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:34.170 [2024-12-13 08:24:46.374228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.170 [2024-12-13 08:24:46.374240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.170 [2024-12-13 08:24:46.374405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.170 "name": "raid_bdev1", 00:13:34.170 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:34.170 
"strip_size_kb": 0, 00:13:34.170 "state": "online", 00:13:34.170 "raid_level": "raid1", 00:13:34.170 "superblock": false, 00:13:34.170 "num_base_bdevs": 2, 00:13:34.170 "num_base_bdevs_discovered": 2, 00:13:34.170 "num_base_bdevs_operational": 2, 00:13:34.170 "base_bdevs_list": [ 00:13:34.170 { 00:13:34.170 "name": "BaseBdev1", 00:13:34.170 "uuid": "3cef7109-a0e0-5738-98e7-c94784ffaf05", 00:13:34.170 "is_configured": true, 00:13:34.170 "data_offset": 0, 00:13:34.170 "data_size": 65536 00:13:34.170 }, 00:13:34.170 { 00:13:34.170 "name": "BaseBdev2", 00:13:34.170 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:34.170 "is_configured": true, 00:13:34.170 "data_offset": 0, 00:13:34.170 "data_size": 65536 00:13:34.170 } 00:13:34.170 ] 00:13:34.170 }' 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.170 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:34.732 [2024-12-13 08:24:46.847459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.732 08:24:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.732 [2024-12-13 08:24:46.935006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.732 08:24:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.732 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.732 "name": "raid_bdev1", 00:13:34.732 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:34.732 "strip_size_kb": 0, 00:13:34.732 "state": "online", 00:13:34.732 "raid_level": "raid1", 00:13:34.732 "superblock": false, 00:13:34.732 "num_base_bdevs": 2, 00:13:34.732 "num_base_bdevs_discovered": 1, 00:13:34.733 "num_base_bdevs_operational": 1, 00:13:34.733 "base_bdevs_list": [ 00:13:34.733 { 00:13:34.733 "name": null, 00:13:34.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.733 "is_configured": false, 00:13:34.733 "data_offset": 0, 00:13:34.733 "data_size": 65536 00:13:34.733 }, 00:13:34.733 { 00:13:34.733 "name": "BaseBdev2", 00:13:34.733 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:34.733 "is_configured": true, 00:13:34.733 "data_offset": 0, 00:13:34.733 "data_size": 65536 00:13:34.733 } 00:13:34.733 ] 00:13:34.733 }' 00:13:34.733 08:24:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.733 08:24:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:34.733 [2024-12-13 08:24:47.038725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:34.733 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.733 Zero copy mechanism will not be used. 00:13:34.733 Running I/O for 60 seconds... 00:13:35.298 08:24:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.298 08:24:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.298 08:24:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.298 [2024-12-13 08:24:47.411899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.298 08:24:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.298 08:24:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.298 [2024-12-13 08:24:47.480897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:35.298 [2024-12-13 08:24:47.482819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.298 [2024-12-13 08:24:47.603893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.298 [2024-12-13 08:24:47.604538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.555 [2024-12-13 08:24:47.821146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.555 [2024-12-13 08:24:47.821540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.813 186.00 IOPS, 558.00 MiB/s [2024-12-13T08:24:48.178Z] [2024-12-13 08:24:48.154635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:35.813 [2024-12-13 08:24:48.155081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.071 [2024-12-13 08:24:48.369442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.071 [2024-12-13 08:24:48.369787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.329 "name": "raid_bdev1", 00:13:36.329 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:36.329 "strip_size_kb": 0, 00:13:36.329 "state": "online", 00:13:36.329 "raid_level": "raid1", 00:13:36.329 "superblock": false, 
00:13:36.329 "num_base_bdevs": 2, 00:13:36.329 "num_base_bdevs_discovered": 2, 00:13:36.329 "num_base_bdevs_operational": 2, 00:13:36.329 "process": { 00:13:36.329 "type": "rebuild", 00:13:36.329 "target": "spare", 00:13:36.329 "progress": { 00:13:36.329 "blocks": 10240, 00:13:36.329 "percent": 15 00:13:36.329 } 00:13:36.329 }, 00:13:36.329 "base_bdevs_list": [ 00:13:36.329 { 00:13:36.329 "name": "spare", 00:13:36.329 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:36.329 "is_configured": true, 00:13:36.329 "data_offset": 0, 00:13:36.329 "data_size": 65536 00:13:36.329 }, 00:13:36.329 { 00:13:36.329 "name": "BaseBdev2", 00:13:36.329 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:36.329 "is_configured": true, 00:13:36.329 "data_offset": 0, 00:13:36.329 "data_size": 65536 00:13:36.329 } 00:13:36.329 ] 00:13:36.329 }' 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.329 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.329 [2024-12-13 08:24:48.630776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.587 [2024-12-13 08:24:48.700818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:36.587 [2024-12-13 08:24:48.813111] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:13:36.587 [2024-12-13 08:24:48.822538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.587 [2024-12-13 08:24:48.822653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.587 [2024-12-13 08:24:48.822686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.587 [2024-12-13 08:24:48.873184] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.587 "name": "raid_bdev1", 00:13:36.587 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:36.587 "strip_size_kb": 0, 00:13:36.587 "state": "online", 00:13:36.587 "raid_level": "raid1", 00:13:36.587 "superblock": false, 00:13:36.587 "num_base_bdevs": 2, 00:13:36.587 "num_base_bdevs_discovered": 1, 00:13:36.587 "num_base_bdevs_operational": 1, 00:13:36.587 "base_bdevs_list": [ 00:13:36.587 { 00:13:36.587 "name": null, 00:13:36.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.587 "is_configured": false, 00:13:36.587 "data_offset": 0, 00:13:36.587 "data_size": 65536 00:13:36.587 }, 00:13:36.587 { 00:13:36.587 "name": "BaseBdev2", 00:13:36.587 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:36.587 "is_configured": true, 00:13:36.587 "data_offset": 0, 00:13:36.587 "data_size": 65536 00:13:36.587 } 00:13:36.587 ] 00:13:36.587 }' 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.587 08:24:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.103 128.50 IOPS, 385.50 MiB/s [2024-12-13T08:24:49.468Z] 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.103 08:24:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.103 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.104 "name": "raid_bdev1", 00:13:37.104 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:37.104 "strip_size_kb": 0, 00:13:37.104 "state": "online", 00:13:37.104 "raid_level": "raid1", 00:13:37.104 "superblock": false, 00:13:37.104 "num_base_bdevs": 2, 00:13:37.104 "num_base_bdevs_discovered": 1, 00:13:37.104 "num_base_bdevs_operational": 1, 00:13:37.104 "base_bdevs_list": [ 00:13:37.104 { 00:13:37.104 "name": null, 00:13:37.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.104 "is_configured": false, 00:13:37.104 "data_offset": 0, 00:13:37.104 "data_size": 65536 00:13:37.104 }, 00:13:37.104 { 00:13:37.104 "name": "BaseBdev2", 00:13:37.104 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:37.104 "is_configured": true, 00:13:37.104 "data_offset": 0, 00:13:37.104 "data_size": 65536 00:13:37.104 } 00:13:37.104 ] 00:13:37.104 }' 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.104 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.362 [2024-12-13 08:24:49.498562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.362 08:24:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.362 [2024-12-13 08:24:49.561688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:37.362 [2024-12-13 08:24:49.563907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.362 [2024-12-13 08:24:49.673170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:37.362 [2024-12-13 08:24:49.673764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:37.620 [2024-12-13 08:24:49.895055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.620 [2024-12-13 08:24:49.895453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.878 143.67 IOPS, 431.00 MiB/s [2024-12-13T08:24:50.243Z] [2024-12-13 08:24:50.148143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.878 [2024-12-13 08:24:50.148690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:38.137 [2024-12-13 
08:24:50.357373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.395 "name": "raid_bdev1", 00:13:38.395 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:38.395 "strip_size_kb": 0, 00:13:38.395 "state": "online", 00:13:38.395 "raid_level": "raid1", 00:13:38.395 "superblock": false, 00:13:38.395 "num_base_bdevs": 2, 00:13:38.395 "num_base_bdevs_discovered": 2, 00:13:38.395 "num_base_bdevs_operational": 2, 00:13:38.395 "process": { 00:13:38.395 "type": "rebuild", 00:13:38.395 "target": "spare", 00:13:38.395 "progress": { 00:13:38.395 "blocks": 12288, 00:13:38.395 "percent": 18 00:13:38.395 } 00:13:38.395 }, 00:13:38.395 "base_bdevs_list": [ 00:13:38.395 { 00:13:38.395 "name": "spare", 
00:13:38.395 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:38.395 "is_configured": true, 00:13:38.395 "data_offset": 0, 00:13:38.395 "data_size": 65536 00:13:38.395 }, 00:13:38.395 { 00:13:38.395 "name": "BaseBdev2", 00:13:38.395 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:38.395 "is_configured": true, 00:13:38.395 "data_offset": 0, 00:13:38.395 "data_size": 65536 00:13:38.395 } 00:13:38.395 ] 00:13:38.395 }' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.395 
08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.395 "name": "raid_bdev1", 00:13:38.395 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:38.395 "strip_size_kb": 0, 00:13:38.395 "state": "online", 00:13:38.395 "raid_level": "raid1", 00:13:38.395 "superblock": false, 00:13:38.395 "num_base_bdevs": 2, 00:13:38.395 "num_base_bdevs_discovered": 2, 00:13:38.395 "num_base_bdevs_operational": 2, 00:13:38.395 "process": { 00:13:38.395 "type": "rebuild", 00:13:38.395 "target": "spare", 00:13:38.395 "progress": { 00:13:38.395 "blocks": 14336, 00:13:38.395 "percent": 21 00:13:38.395 } 00:13:38.395 }, 00:13:38.395 "base_bdevs_list": [ 00:13:38.395 { 00:13:38.395 "name": "spare", 00:13:38.395 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:38.395 "is_configured": true, 00:13:38.395 "data_offset": 0, 00:13:38.395 "data_size": 65536 00:13:38.395 }, 00:13:38.395 { 00:13:38.395 "name": "BaseBdev2", 00:13:38.395 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:38.395 "is_configured": true, 00:13:38.395 "data_offset": 0, 00:13:38.395 "data_size": 65536 00:13:38.395 } 00:13:38.395 ] 00:13:38.395 }' 00:13:38.395 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.654 08:24:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.654 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.654 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.654 08:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.654 [2024-12-13 08:24:50.932591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:38.912 126.25 IOPS, 378.75 MiB/s [2024-12-13T08:24:51.277Z] [2024-12-13 08:24:51.051579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:39.179 [2024-12-13 08:24:51.284238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:39.179 [2024-12-13 08:24:51.284964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:39.179 [2024-12-13 08:24:51.494980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:39.179 [2024-12-13 08:24:51.495473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:39.755 [2024-12-13 08:24:51.813163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:39.755 [2024-12-13 08:24:51.813864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.755 
08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.755 "name": "raid_bdev1", 00:13:39.755 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:39.755 "strip_size_kb": 0, 00:13:39.755 "state": "online", 00:13:39.755 "raid_level": "raid1", 00:13:39.755 "superblock": false, 00:13:39.755 "num_base_bdevs": 2, 00:13:39.755 "num_base_bdevs_discovered": 2, 00:13:39.755 "num_base_bdevs_operational": 2, 00:13:39.755 "process": { 00:13:39.755 "type": "rebuild", 00:13:39.755 "target": "spare", 00:13:39.755 "progress": { 00:13:39.755 "blocks": 32768, 00:13:39.755 "percent": 50 00:13:39.755 } 00:13:39.755 }, 00:13:39.755 "base_bdevs_list": [ 00:13:39.755 { 00:13:39.755 "name": "spare", 00:13:39.755 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:39.755 "is_configured": true, 00:13:39.755 "data_offset": 0, 00:13:39.755 "data_size": 65536 00:13:39.755 }, 00:13:39.755 { 00:13:39.755 "name": "BaseBdev2", 00:13:39.755 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 
00:13:39.755 "is_configured": true, 00:13:39.755 "data_offset": 0, 00:13:39.755 "data_size": 65536 00:13:39.755 } 00:13:39.755 ] 00:13:39.755 }' 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.755 [2024-12-13 08:24:51.922319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:39.755 [2024-12-13 08:24:51.922733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.755 08:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.014 111.60 IOPS, 334.80 MiB/s [2024-12-13T08:24:52.379Z] [2024-12-13 08:24:52.134669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:40.014 [2024-12-13 08:24:52.135335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:40.582 [2024-12-13 08:24:52.666975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 
-- # local process_type=rebuild 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.842 08:24:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.842 08:24:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.842 08:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.842 "name": "raid_bdev1", 00:13:40.842 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:40.842 "strip_size_kb": 0, 00:13:40.842 "state": "online", 00:13:40.842 "raid_level": "raid1", 00:13:40.842 "superblock": false, 00:13:40.842 "num_base_bdevs": 2, 00:13:40.842 "num_base_bdevs_discovered": 2, 00:13:40.842 "num_base_bdevs_operational": 2, 00:13:40.842 "process": { 00:13:40.842 "type": "rebuild", 00:13:40.842 "target": "spare", 00:13:40.842 "progress": { 00:13:40.842 "blocks": 51200, 00:13:40.842 "percent": 78 00:13:40.842 } 00:13:40.842 }, 00:13:40.842 "base_bdevs_list": [ 00:13:40.842 { 00:13:40.842 "name": "spare", 00:13:40.842 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:40.842 "is_configured": true, 00:13:40.842 "data_offset": 0, 00:13:40.842 "data_size": 65536 00:13:40.842 }, 00:13:40.842 { 00:13:40.842 "name": "BaseBdev2", 00:13:40.842 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:40.842 "is_configured": true, 00:13:40.842 "data_offset": 0, 00:13:40.842 "data_size": 65536 00:13:40.842 } 00:13:40.842 ] 00:13:40.842 }' 00:13:40.842 08:24:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.842 101.83 IOPS, 305.50 MiB/s [2024-12-13T08:24:53.207Z] 08:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.842 08:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.842 08:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.842 08:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.412 [2024-12-13 08:24:53.604232] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:41.412 [2024-12-13 08:24:53.659675] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:41.412 [2024-12-13 08:24:53.662118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.982 91.71 IOPS, 275.14 MiB/s [2024-12-13T08:24:54.347Z] 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.982 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.983 "name": "raid_bdev1", 00:13:41.983 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:41.983 "strip_size_kb": 0, 00:13:41.983 "state": "online", 00:13:41.983 "raid_level": "raid1", 00:13:41.983 "superblock": false, 00:13:41.983 "num_base_bdevs": 2, 00:13:41.983 "num_base_bdevs_discovered": 2, 00:13:41.983 "num_base_bdevs_operational": 2, 00:13:41.983 "base_bdevs_list": [ 00:13:41.983 { 00:13:41.983 "name": "spare", 00:13:41.983 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:41.983 "is_configured": true, 00:13:41.983 "data_offset": 0, 00:13:41.983 "data_size": 65536 00:13:41.983 }, 00:13:41.983 { 00:13:41.983 "name": "BaseBdev2", 00:13:41.983 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:41.983 "is_configured": true, 00:13:41.983 "data_offset": 0, 00:13:41.983 "data_size": 65536 00:13:41.983 } 00:13:41.983 ] 00:13:41.983 }' 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.983 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.983 "name": "raid_bdev1", 00:13:41.983 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:41.983 "strip_size_kb": 0, 00:13:41.983 "state": "online", 00:13:41.983 "raid_level": "raid1", 00:13:41.983 "superblock": false, 00:13:41.983 "num_base_bdevs": 2, 00:13:41.983 "num_base_bdevs_discovered": 2, 00:13:41.983 "num_base_bdevs_operational": 2, 00:13:41.983 "base_bdevs_list": [ 00:13:41.983 { 00:13:41.983 "name": "spare", 00:13:41.983 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:41.983 "is_configured": true, 00:13:41.983 "data_offset": 0, 00:13:41.983 "data_size": 65536 00:13:41.983 }, 00:13:41.983 { 00:13:41.983 "name": "BaseBdev2", 00:13:41.983 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:41.983 "is_configured": true, 00:13:41.983 "data_offset": 0, 00:13:41.983 "data_size": 65536 00:13:41.983 } 00:13:41.983 ] 00:13:41.983 }' 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.243 08:24:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.243 08:24:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.243 "name": "raid_bdev1", 00:13:42.243 "uuid": "f0bf0a72-a608-43e7-aa2c-572a4e5e225c", 00:13:42.243 "strip_size_kb": 0, 00:13:42.243 "state": "online", 00:13:42.243 "raid_level": "raid1", 00:13:42.243 "superblock": false, 00:13:42.243 "num_base_bdevs": 2, 00:13:42.243 "num_base_bdevs_discovered": 2, 00:13:42.243 "num_base_bdevs_operational": 2, 00:13:42.243 "base_bdevs_list": [ 00:13:42.243 { 00:13:42.243 "name": "spare", 00:13:42.243 "uuid": "894533b8-e7c3-5471-9c0f-69195d15233f", 00:13:42.243 "is_configured": true, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 }, 00:13:42.243 { 00:13:42.243 "name": "BaseBdev2", 00:13:42.243 "uuid": "6e2b8357-200c-565e-a59f-17cb08bb4d7f", 00:13:42.243 "is_configured": true, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 } 00:13:42.243 ] 00:13:42.243 }' 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.243 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.503 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.503 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.503 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.503 [2024-12-13 08:24:54.860309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.503 [2024-12-13 08:24:54.860425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.763 00:13:42.763 Latency(us) 00:13:42.763 [2024-12-13T08:24:55.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.763 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:42.763 raid_bdev1 : 
7.93 84.57 253.70 0.00 0.00 14816.66 341.63 137368.03 00:13:42.763 [2024-12-13T08:24:55.128Z] =================================================================================================================== 00:13:42.763 [2024-12-13T08:24:55.128Z] Total : 84.57 253.70 0.00 0.00 14816.66 341.63 137368.03 00:13:42.763 [2024-12-13 08:24:54.983078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.763 [2024-12-13 08:24:54.983256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.763 [2024-12-13 08:24:54.983368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.763 [2024-12-13 08:24:54.983425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:42.763 { 00:13:42.763 "results": [ 00:13:42.763 { 00:13:42.763 "job": "raid_bdev1", 00:13:42.763 "core_mask": "0x1", 00:13:42.763 "workload": "randrw", 00:13:42.763 "percentage": 50, 00:13:42.763 "status": "finished", 00:13:42.763 "queue_depth": 2, 00:13:42.763 "io_size": 3145728, 00:13:42.763 "runtime": 7.934535, 00:13:42.763 "iops": 84.56702251612728, 00:13:42.763 "mibps": 253.70106754838184, 00:13:42.763 "io_failed": 0, 00:13:42.763 "io_timeout": 0, 00:13:42.763 "avg_latency_us": 14816.659371725704, 00:13:42.763 "min_latency_us": 341.63144104803496, 00:13:42.763 "max_latency_us": 137368.03493449782 00:13:42.763 } 00:13:42.763 ], 00:13:42.763 "core_count": 1 00:13:42.763 } 00:13:42.763 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.763 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.763 08:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.763 08:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:42.763 08:24:54 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.763 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:43.022 /dev/nbd0 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # 
local i 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.022 1+0 records in 00:13:43.022 1+0 records out 00:13:43.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392084 s, 10.4 MB/s 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:43.022 08:24:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.022 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:43.282 /dev/nbd1 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:43.282 08:24:55 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.282 1+0 records in 00:13:43.282 1+0 records out 00:13:43.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275071 s, 14.9 MB/s 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.282 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:43.542 08:24:55 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.542 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.801 08:24:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.061 
08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76643 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76643 ']' 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76643 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76643 00:13:44.061 killing process with pid 76643 00:13:44.061 Received shutdown signal, test time was about 9.241716 seconds 00:13:44.061 00:13:44.061 Latency(us) 00:13:44.061 [2024-12-13T08:24:56.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.061 [2024-12-13T08:24:56.426Z] =================================================================================================================== 00:13:44.061 [2024-12-13T08:24:56.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76643' 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76643 00:13:44.061 [2024-12-13 08:24:56.264734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.061 08:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76643 00:13:44.328 [2024-12-13 08:24:56.499772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.718 00:13:45.718 real 0m12.447s 00:13:45.718 user 0m15.677s 00:13:45.718 sys 0m1.534s 00:13:45.718 ************************************ 00:13:45.718 END TEST raid_rebuild_test_io 00:13:45.718 ************************************ 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.718 08:24:57 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:45.718 08:24:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:45.718 08:24:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.718 08:24:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.718 ************************************ 00:13:45.718 START TEST raid_rebuild_test_sb_io 00:13:45.718 ************************************ 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:45.718 08:24:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:45.718 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77024 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77024 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77024 ']' 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.719 08:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.719 [2024-12-13 08:24:57.869859] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:13:45.719 [2024-12-13 08:24:57.870071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77024 ] 00:13:45.719 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:45.719 Zero copy mechanism will not be used. 00:13:45.719 [2024-12-13 08:24:58.049495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.977 [2024-12-13 08:24:58.178143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.237 [2024-12-13 08:24:58.381546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.237 [2024-12-13 08:24:58.381686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 BaseBdev1_malloc 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.497 08:24:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 [2024-12-13 08:24:58.777874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.497 [2024-12-13 08:24:58.778004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.497 [2024-12-13 08:24:58.778046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:46.497 [2024-12-13 08:24:58.778078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.497 [2024-12-13 08:24:58.780386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.497 [2024-12-13 08:24:58.780466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.497 BaseBdev1 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 BaseBdev2_malloc 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 [2024-12-13 08:24:58.834469] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:13:46.497 [2024-12-13 08:24:58.834536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.497 [2024-12-13 08:24:58.834557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:46.497 [2024-12-13 08:24:58.834568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.497 [2024-12-13 08:24:58.836940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.497 [2024-12-13 08:24:58.836981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.497 BaseBdev2 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.497 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.756 spare_malloc 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.756 spare_delay 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.756 [2024-12-13 08:24:58.918168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:46.756 [2024-12-13 08:24:58.918295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.756 [2024-12-13 08:24:58.918357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:46.756 [2024-12-13 08:24:58.918397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.756 [2024-12-13 08:24:58.920855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.756 [2024-12-13 08:24:58.920898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:46.756 spare 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.756 [2024-12-13 08:24:58.930223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.756 [2024-12-13 08:24:58.932470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.756 [2024-12-13 08:24:58.932749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:46.756 [2024-12-13 08:24:58.932773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.756 [2024-12-13 08:24:58.933072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:46.756 
[2024-12-13 08:24:58.933298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:46.756 [2024-12-13 08:24:58.933316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:46.756 [2024-12-13 08:24:58.933493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.756 
08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.756 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.756 "name": "raid_bdev1", 00:13:46.756 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:46.756 "strip_size_kb": 0, 00:13:46.756 "state": "online", 00:13:46.757 "raid_level": "raid1", 00:13:46.757 "superblock": true, 00:13:46.757 "num_base_bdevs": 2, 00:13:46.757 "num_base_bdevs_discovered": 2, 00:13:46.757 "num_base_bdevs_operational": 2, 00:13:46.757 "base_bdevs_list": [ 00:13:46.757 { 00:13:46.757 "name": "BaseBdev1", 00:13:46.757 "uuid": "935e849a-32ef-5873-a326-021d265fba9c", 00:13:46.757 "is_configured": true, 00:13:46.757 "data_offset": 2048, 00:13:46.757 "data_size": 63488 00:13:46.757 }, 00:13:46.757 { 00:13:46.757 "name": "BaseBdev2", 00:13:46.757 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:46.757 "is_configured": true, 00:13:46.757 "data_offset": 2048, 00:13:46.757 "data_size": 63488 00:13:46.757 } 00:13:46.757 ] 00:13:46.757 }' 00:13:46.757 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.757 08:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.016 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.016 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.016 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:47.016 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.016 [2024-12-13 08:24:59.369719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.275 [2024-12-13 08:24:59.473270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.275 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.275 "name": "raid_bdev1", 00:13:47.275 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:47.275 "strip_size_kb": 0, 00:13:47.275 "state": "online", 00:13:47.275 "raid_level": "raid1", 00:13:47.275 "superblock": true, 00:13:47.275 "num_base_bdevs": 2, 00:13:47.275 "num_base_bdevs_discovered": 1, 00:13:47.275 "num_base_bdevs_operational": 1, 00:13:47.275 "base_bdevs_list": [ 00:13:47.275 { 00:13:47.275 "name": null, 00:13:47.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.275 "is_configured": false, 00:13:47.275 
"data_offset": 0, 00:13:47.275 "data_size": 63488 00:13:47.275 }, 00:13:47.275 { 00:13:47.275 "name": "BaseBdev2", 00:13:47.275 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:47.275 "is_configured": true, 00:13:47.275 "data_offset": 2048, 00:13:47.275 "data_size": 63488 00:13:47.276 } 00:13:47.276 ] 00:13:47.276 }' 00:13:47.276 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.276 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.276 [2024-12-13 08:24:59.573455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:47.276 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:47.276 Zero copy mechanism will not be used. 00:13:47.276 Running I/O for 60 seconds... 00:13:47.846 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.846 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.846 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.846 [2024-12-13 08:24:59.937864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.846 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.846 08:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:47.846 [2024-12-13 08:25:00.009291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:47.846 [2024-12-13 08:25:00.011471] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.846 [2024-12-13 08:25:00.126129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:47.846 [2024-12-13 08:25:00.126788] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.105 [2024-12-13 08:25:00.342794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.105 [2024-12-13 08:25:00.343273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.364 196.00 IOPS, 588.00 MiB/s [2024-12-13T08:25:00.729Z] [2024-12-13 08:25:00.586155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:48.624 [2024-12-13 08:25:00.806564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:48.624 [2024-12-13 08:25:00.807035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:48.624 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.624 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.624 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.624 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.624 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.886 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.886 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.886 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.886 08:25:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.886 08:25:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.886 "name": "raid_bdev1", 00:13:48.886 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:48.886 "strip_size_kb": 0, 00:13:48.886 "state": "online", 00:13:48.886 "raid_level": "raid1", 00:13:48.886 "superblock": true, 00:13:48.886 "num_base_bdevs": 2, 00:13:48.886 "num_base_bdevs_discovered": 2, 00:13:48.886 "num_base_bdevs_operational": 2, 00:13:48.886 "process": { 00:13:48.886 "type": "rebuild", 00:13:48.886 "target": "spare", 00:13:48.886 "progress": { 00:13:48.886 "blocks": 10240, 00:13:48.886 "percent": 16 00:13:48.886 } 00:13:48.886 }, 00:13:48.886 "base_bdevs_list": [ 00:13:48.886 { 00:13:48.886 "name": "spare", 00:13:48.886 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:48.886 "is_configured": true, 00:13:48.886 "data_offset": 2048, 00:13:48.886 "data_size": 63488 00:13:48.886 }, 00:13:48.886 { 00:13:48.886 "name": "BaseBdev2", 00:13:48.886 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:48.886 "is_configured": true, 00:13:48.886 "data_offset": 2048, 00:13:48.886 "data_size": 63488 00:13:48.886 } 00:13:48.886 ] 00:13:48.886 }' 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.886 08:25:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.886 [2024-12-13 08:25:01.126330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.886 [2024-12-13 08:25:01.126465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:48.886 [2024-12-13 08:25:01.167382] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.886 [2024-12-13 08:25:01.176365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.886 [2024-12-13 08:25:01.176420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.886 [2024-12-13 08:25:01.176439] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.886 [2024-12-13 08:25:01.215741] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.886 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.147 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.147 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.147 "name": "raid_bdev1", 00:13:49.147 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:49.147 "strip_size_kb": 0, 00:13:49.147 "state": "online", 00:13:49.147 "raid_level": "raid1", 00:13:49.147 "superblock": true, 00:13:49.147 "num_base_bdevs": 2, 00:13:49.147 "num_base_bdevs_discovered": 1, 00:13:49.147 "num_base_bdevs_operational": 1, 00:13:49.147 "base_bdevs_list": [ 00:13:49.147 { 00:13:49.147 "name": null, 00:13:49.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.147 "is_configured": false, 00:13:49.147 "data_offset": 0, 00:13:49.147 "data_size": 63488 00:13:49.147 }, 00:13:49.147 { 00:13:49.147 "name": "BaseBdev2", 00:13:49.147 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:49.147 "is_configured": true, 00:13:49.147 "data_offset": 2048, 00:13:49.147 "data_size": 63488 00:13:49.147 } 00:13:49.147 ] 00:13:49.147 }' 00:13:49.147 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.147 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:49.405 168.50 IOPS, 505.50 MiB/s [2024-12-13T08:25:01.770Z] 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.405 "name": "raid_bdev1", 00:13:49.405 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:49.405 "strip_size_kb": 0, 00:13:49.405 "state": "online", 00:13:49.405 "raid_level": "raid1", 00:13:49.405 "superblock": true, 00:13:49.405 "num_base_bdevs": 2, 00:13:49.405 "num_base_bdevs_discovered": 1, 00:13:49.405 "num_base_bdevs_operational": 1, 00:13:49.405 "base_bdevs_list": [ 00:13:49.405 { 00:13:49.405 "name": null, 00:13:49.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.405 "is_configured": false, 00:13:49.405 "data_offset": 0, 00:13:49.405 "data_size": 63488 00:13:49.405 }, 00:13:49.405 { 00:13:49.405 "name": "BaseBdev2", 00:13:49.405 "uuid": 
"df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:49.405 "is_configured": true, 00:13:49.405 "data_offset": 2048, 00:13:49.405 "data_size": 63488 00:13:49.405 } 00:13:49.405 ] 00:13:49.405 }' 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.405 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.664 [2024-12-13 08:25:01.806454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.664 08:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:49.664 [2024-12-13 08:25:01.857254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:49.664 [2024-12-13 08:25:01.859295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.664 [2024-12-13 08:25:01.968002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.664 [2024-12-13 08:25:01.968645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.923 [2024-12-13 08:25:02.190536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.923 [2024-12-13 08:25:02.190921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:50.493 165.67 IOPS, 497.00 MiB/s [2024-12-13T08:25:02.858Z] [2024-12-13 08:25:02.649906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.493 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.752 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.752 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.752 "name": "raid_bdev1", 00:13:50.752 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:50.752 "strip_size_kb": 0, 00:13:50.752 "state": "online", 00:13:50.752 "raid_level": "raid1", 00:13:50.752 "superblock": true, 00:13:50.752 "num_base_bdevs": 2, 00:13:50.752 "num_base_bdevs_discovered": 2, 00:13:50.752 
"num_base_bdevs_operational": 2, 00:13:50.752 "process": { 00:13:50.752 "type": "rebuild", 00:13:50.752 "target": "spare", 00:13:50.752 "progress": { 00:13:50.752 "blocks": 12288, 00:13:50.752 "percent": 19 00:13:50.752 } 00:13:50.752 }, 00:13:50.752 "base_bdevs_list": [ 00:13:50.752 { 00:13:50.752 "name": "spare", 00:13:50.752 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:50.752 "is_configured": true, 00:13:50.752 "data_offset": 2048, 00:13:50.752 "data_size": 63488 00:13:50.752 }, 00:13:50.752 { 00:13:50.752 "name": "BaseBdev2", 00:13:50.752 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:50.752 "is_configured": true, 00:13:50.752 "data_offset": 2048, 00:13:50.752 "data_size": 63488 00:13:50.753 } 00:13:50.753 ] 00:13:50.753 }' 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.753 [2024-12-13 08:25:02.992748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:50.753 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.753 08:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.753 "name": "raid_bdev1", 00:13:50.753 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:50.753 "strip_size_kb": 0, 00:13:50.753 "state": "online", 00:13:50.753 "raid_level": "raid1", 00:13:50.753 "superblock": true, 00:13:50.753 "num_base_bdevs": 2, 00:13:50.753 "num_base_bdevs_discovered": 2, 00:13:50.753 "num_base_bdevs_operational": 2, 00:13:50.753 "process": { 00:13:50.753 "type": "rebuild", 00:13:50.753 "target": "spare", 00:13:50.753 "progress": { 00:13:50.753 "blocks": 
16384, 00:13:50.753 "percent": 25 00:13:50.753 } 00:13:50.753 }, 00:13:50.753 "base_bdevs_list": [ 00:13:50.753 { 00:13:50.753 "name": "spare", 00:13:50.753 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:50.753 "is_configured": true, 00:13:50.753 "data_offset": 2048, 00:13:50.753 "data_size": 63488 00:13:50.753 }, 00:13:50.753 { 00:13:50.753 "name": "BaseBdev2", 00:13:50.753 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:50.753 "is_configured": true, 00:13:50.753 "data_offset": 2048, 00:13:50.753 "data_size": 63488 00:13:50.753 } 00:13:50.753 ] 00:13:50.753 }' 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.753 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.013 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.013 08:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.013 [2024-12-13 08:25:03.229308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:51.531 150.25 IOPS, 450.75 MiB/s [2024-12-13T08:25:03.896Z] [2024-12-13 08:25:03.710268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:51.801 [2024-12-13 08:25:03.931884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.801 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.064 "name": "raid_bdev1", 00:13:52.064 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:52.064 "strip_size_kb": 0, 00:13:52.064 "state": "online", 00:13:52.064 "raid_level": "raid1", 00:13:52.064 "superblock": true, 00:13:52.064 "num_base_bdevs": 2, 00:13:52.064 "num_base_bdevs_discovered": 2, 00:13:52.064 "num_base_bdevs_operational": 2, 00:13:52.064 "process": { 00:13:52.064 "type": "rebuild", 00:13:52.064 "target": "spare", 00:13:52.064 "progress": { 00:13:52.064 "blocks": 30720, 00:13:52.064 "percent": 48 00:13:52.064 } 00:13:52.064 }, 00:13:52.064 "base_bdevs_list": [ 00:13:52.064 { 00:13:52.064 "name": "spare", 00:13:52.064 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:52.064 "is_configured": true, 00:13:52.064 "data_offset": 2048, 00:13:52.064 "data_size": 63488 00:13:52.064 }, 00:13:52.064 { 00:13:52.064 "name": "BaseBdev2", 00:13:52.064 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:52.064 
"is_configured": true, 00:13:52.064 "data_offset": 2048, 00:13:52.064 "data_size": 63488 00:13:52.064 } 00:13:52.064 ] 00:13:52.064 }' 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.064 08:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.324 130.40 IOPS, 391.20 MiB/s [2024-12-13T08:25:04.689Z] [2024-12-13 08:25:04.619536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.297 08:25:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.297 "name": "raid_bdev1", 00:13:53.297 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:53.297 "strip_size_kb": 0, 00:13:53.297 "state": "online", 00:13:53.297 "raid_level": "raid1", 00:13:53.297 "superblock": true, 00:13:53.297 "num_base_bdevs": 2, 00:13:53.297 "num_base_bdevs_discovered": 2, 00:13:53.297 "num_base_bdevs_operational": 2, 00:13:53.297 "process": { 00:13:53.297 "type": "rebuild", 00:13:53.297 "target": "spare", 00:13:53.297 "progress": { 00:13:53.297 "blocks": 53248, 00:13:53.297 "percent": 83 00:13:53.297 } 00:13:53.297 }, 00:13:53.297 "base_bdevs_list": [ 00:13:53.297 { 00:13:53.297 "name": "spare", 00:13:53.297 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:53.297 "is_configured": true, 00:13:53.297 "data_offset": 2048, 00:13:53.297 "data_size": 63488 00:13:53.297 }, 00:13:53.297 { 00:13:53.297 "name": "BaseBdev2", 00:13:53.297 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:53.297 "is_configured": true, 00:13:53.297 "data_offset": 2048, 00:13:53.297 "data_size": 63488 00:13:53.297 } 00:13:53.297 ] 00:13:53.297 }' 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.297 08:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.558 115.50 IOPS, 346.50 MiB/s [2024-12-13T08:25:05.923Z] 
[2024-12-13 08:25:05.812385] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:53.558 [2024-12-13 08:25:05.917356] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:53.558 [2024-12-13 08:25:05.919545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.126 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.387 "name": "raid_bdev1", 00:13:54.387 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:54.387 "strip_size_kb": 0, 00:13:54.387 "state": "online", 00:13:54.387 "raid_level": "raid1", 00:13:54.387 "superblock": true, 00:13:54.387 "num_base_bdevs": 2, 
00:13:54.387 "num_base_bdevs_discovered": 2, 00:13:54.387 "num_base_bdevs_operational": 2, 00:13:54.387 "base_bdevs_list": [ 00:13:54.387 { 00:13:54.387 "name": "spare", 00:13:54.387 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:54.387 "is_configured": true, 00:13:54.387 "data_offset": 2048, 00:13:54.387 "data_size": 63488 00:13:54.387 }, 00:13:54.387 { 00:13:54.387 "name": "BaseBdev2", 00:13:54.387 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:54.387 "is_configured": true, 00:13:54.387 "data_offset": 2048, 00:13:54.387 "data_size": 63488 00:13:54.387 } 00:13:54.387 ] 00:13:54.387 }' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.387 104.00 IOPS, 312.00 MiB/s [2024-12-13T08:25:06.752Z] 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.387 "name": "raid_bdev1", 00:13:54.387 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:54.387 "strip_size_kb": 0, 00:13:54.387 "state": "online", 00:13:54.387 "raid_level": "raid1", 00:13:54.387 "superblock": true, 00:13:54.387 "num_base_bdevs": 2, 00:13:54.387 "num_base_bdevs_discovered": 2, 00:13:54.387 "num_base_bdevs_operational": 2, 00:13:54.387 "base_bdevs_list": [ 00:13:54.387 { 00:13:54.387 "name": "spare", 00:13:54.387 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:54.387 "is_configured": true, 00:13:54.387 "data_offset": 2048, 00:13:54.387 "data_size": 63488 00:13:54.387 }, 00:13:54.387 { 00:13:54.387 "name": "BaseBdev2", 00:13:54.387 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:54.387 "is_configured": true, 00:13:54.387 "data_offset": 2048, 00:13:54.387 "data_size": 63488 00:13:54.387 } 00:13:54.387 ] 00:13:54.387 }' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.387 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.649 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.649 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.649 "name": "raid_bdev1", 00:13:54.649 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:54.649 "strip_size_kb": 0, 00:13:54.649 "state": "online", 00:13:54.649 "raid_level": "raid1", 00:13:54.650 "superblock": true, 00:13:54.650 "num_base_bdevs": 2, 00:13:54.650 "num_base_bdevs_discovered": 2, 00:13:54.650 "num_base_bdevs_operational": 2, 00:13:54.650 "base_bdevs_list": [ 
00:13:54.650 { 00:13:54.650 "name": "spare", 00:13:54.650 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:54.650 "is_configured": true, 00:13:54.650 "data_offset": 2048, 00:13:54.650 "data_size": 63488 00:13:54.650 }, 00:13:54.650 { 00:13:54.650 "name": "BaseBdev2", 00:13:54.650 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:54.650 "is_configured": true, 00:13:54.650 "data_offset": 2048, 00:13:54.650 "data_size": 63488 00:13:54.650 } 00:13:54.650 ] 00:13:54.650 }' 00:13:54.650 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.650 08:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.910 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.910 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 [2024-12-13 08:25:07.166631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.910 [2024-12-13 08:25:07.166722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.910 00:13:54.910 Latency(us) 00:13:54.910 [2024-12-13T08:25:07.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.910 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:54.910 raid_bdev1 : 7.68 97.77 293.30 0.00 0.00 15184.02 313.01 113099.68 00:13:54.910 [2024-12-13T08:25:07.275Z] =================================================================================================================== 00:13:54.910 [2024-12-13T08:25:07.275Z] Total : 97.77 293.30 0.00 0.00 15184.02 313.01 113099.68 00:13:54.910 [2024-12-13 08:25:07.264233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.910 [2024-12-13 
08:25:07.264384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.910 [2024-12-13 08:25:07.264511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.910 [2024-12-13 08:25:07.264561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:54.910 { 00:13:54.910 "results": [ 00:13:54.910 { 00:13:54.910 "job": "raid_bdev1", 00:13:54.910 "core_mask": "0x1", 00:13:54.910 "workload": "randrw", 00:13:54.910 "percentage": 50, 00:13:54.910 "status": "finished", 00:13:54.910 "queue_depth": 2, 00:13:54.910 "io_size": 3145728, 00:13:54.910 "runtime": 7.68146, 00:13:54.910 "iops": 97.76787225345181, 00:13:54.910 "mibps": 293.30361676035545, 00:13:54.910 "io_failed": 0, 00:13:54.910 "io_timeout": 0, 00:13:54.910 "avg_latency_us": 15184.023165619057, 00:13:54.910 "min_latency_us": 313.0131004366812, 00:13:54.910 "max_latency_us": 113099.68209606987 00:13:54.910 } 00:13:54.910 ], 00:13:54.910 "core_count": 1 00:13:54.910 } 00:13:54.910 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.170 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:55.170 /dev/nbd0 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:55.429 
08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.429 1+0 records in 00:13:55.429 1+0 records out 00:13:55.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585908 s, 7.0 MB/s 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.429 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:55.688 /dev/nbd1 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.688 1+0 records in 00:13:55.688 1+0 records out 00:13:55.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251642 s, 16.3 MB/s 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.688 08:25:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.688 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.946 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.205 [2024-12-13 08:25:08.497857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.205 [2024-12-13 08:25:08.498000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.205 [2024-12-13 08:25:08.498072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:56.205 [2024-12-13 08:25:08.498158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.205 [2024-12-13 08:25:08.500584] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.205 [2024-12-13 08:25:08.500662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.205 [2024-12-13 08:25:08.500770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:56.205 [2024-12-13 08:25:08.500841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.205 [2024-12-13 08:25:08.500996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.205 spare 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.205 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 [2024-12-13 08:25:08.600934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:56.464 [2024-12-13 08:25:08.601086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.464 [2024-12-13 08:25:08.601486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:56.464 [2024-12-13 08:25:08.601741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:56.464 [2024-12-13 08:25:08.601786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:56.464 [2024-12-13 08:25:08.602048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.464 "name": "raid_bdev1", 00:13:56.464 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:56.464 "strip_size_kb": 0, 00:13:56.464 "state": "online", 00:13:56.464 "raid_level": "raid1", 00:13:56.464 "superblock": true, 00:13:56.464 "num_base_bdevs": 2, 00:13:56.464 
"num_base_bdevs_discovered": 2, 00:13:56.464 "num_base_bdevs_operational": 2, 00:13:56.464 "base_bdevs_list": [ 00:13:56.464 { 00:13:56.464 "name": "spare", 00:13:56.464 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:56.464 "is_configured": true, 00:13:56.464 "data_offset": 2048, 00:13:56.464 "data_size": 63488 00:13:56.464 }, 00:13:56.464 { 00:13:56.464 "name": "BaseBdev2", 00:13:56.464 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:56.464 "is_configured": true, 00:13:56.464 "data_offset": 2048, 00:13:56.464 "data_size": 63488 00:13:56.464 } 00:13:56.464 ] 00:13:56.464 }' 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.464 08:25:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.033 "name": "raid_bdev1", 00:13:57.033 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:57.033 "strip_size_kb": 0, 00:13:57.033 "state": "online", 00:13:57.033 "raid_level": "raid1", 00:13:57.033 "superblock": true, 00:13:57.033 "num_base_bdevs": 2, 00:13:57.033 "num_base_bdevs_discovered": 2, 00:13:57.033 "num_base_bdevs_operational": 2, 00:13:57.033 "base_bdevs_list": [ 00:13:57.033 { 00:13:57.033 "name": "spare", 00:13:57.033 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:57.033 "is_configured": true, 00:13:57.033 "data_offset": 2048, 00:13:57.033 "data_size": 63488 00:13:57.033 }, 00:13:57.033 { 00:13:57.033 "name": "BaseBdev2", 00:13:57.033 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:57.033 "is_configured": true, 00:13:57.033 "data_offset": 2048, 00:13:57.033 "data_size": 63488 00:13:57.033 } 00:13:57.033 ] 00:13:57.033 }' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.033 [2024-12-13 08:25:09.304992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.033 08:25:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.033 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.033 "name": "raid_bdev1", 00:13:57.033 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:57.033 "strip_size_kb": 0, 00:13:57.033 "state": "online", 00:13:57.033 "raid_level": "raid1", 00:13:57.033 "superblock": true, 00:13:57.034 "num_base_bdevs": 2, 00:13:57.034 "num_base_bdevs_discovered": 1, 00:13:57.034 "num_base_bdevs_operational": 1, 00:13:57.034 "base_bdevs_list": [ 00:13:57.034 { 00:13:57.034 "name": null, 00:13:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.034 "is_configured": false, 00:13:57.034 "data_offset": 0, 00:13:57.034 "data_size": 63488 00:13:57.034 }, 00:13:57.034 { 00:13:57.034 "name": "BaseBdev2", 00:13:57.034 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:57.034 "is_configured": true, 00:13:57.034 "data_offset": 2048, 00:13:57.034 "data_size": 63488 00:13:57.034 } 00:13:57.034 ] 00:13:57.034 }' 00:13:57.034 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.034 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.601 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.601 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.601 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.601 [2024-12-13 08:25:09.764302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.601 [2024-12-13 08:25:09.764541] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.601 [2024-12-13 08:25:09.764564] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:57.601 [2024-12-13 08:25:09.764608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.601 [2024-12-13 08:25:09.782494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:57.601 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.601 08:25:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:57.601 [2024-12-13 08:25:09.784556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.606 "name": "raid_bdev1", 00:13:58.606 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:58.606 "strip_size_kb": 0, 00:13:58.606 "state": "online", 00:13:58.606 "raid_level": "raid1", 00:13:58.606 "superblock": true, 00:13:58.606 "num_base_bdevs": 2, 00:13:58.606 "num_base_bdevs_discovered": 2, 00:13:58.606 "num_base_bdevs_operational": 2, 00:13:58.606 "process": { 00:13:58.606 "type": "rebuild", 00:13:58.606 "target": "spare", 00:13:58.606 "progress": { 00:13:58.606 "blocks": 20480, 00:13:58.606 "percent": 32 00:13:58.606 } 00:13:58.606 }, 00:13:58.606 "base_bdevs_list": [ 00:13:58.606 { 00:13:58.606 "name": "spare", 00:13:58.606 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:13:58.606 "is_configured": true, 00:13:58.606 "data_offset": 2048, 00:13:58.606 "data_size": 63488 00:13:58.606 }, 00:13:58.606 { 00:13:58.606 "name": "BaseBdev2", 00:13:58.606 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:58.606 "is_configured": true, 00:13:58.606 "data_offset": 2048, 00:13:58.606 "data_size": 63488 00:13:58.606 } 00:13:58.606 ] 00:13:58.606 }' 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.606 08:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.606 
[2024-12-13 08:25:10.940634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.866 [2024-12-13 08:25:10.990892] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.866 [2024-12-13 08:25:10.990978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.866 [2024-12-13 08:25:10.990994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.866 [2024-12-13 08:25:10.991003] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.866 08:25:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.866 "name": "raid_bdev1", 00:13:58.866 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:13:58.866 "strip_size_kb": 0, 00:13:58.866 "state": "online", 00:13:58.866 "raid_level": "raid1", 00:13:58.866 "superblock": true, 00:13:58.866 "num_base_bdevs": 2, 00:13:58.866 "num_base_bdevs_discovered": 1, 00:13:58.866 "num_base_bdevs_operational": 1, 00:13:58.866 "base_bdevs_list": [ 00:13:58.866 { 00:13:58.866 "name": null, 00:13:58.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.866 "is_configured": false, 00:13:58.866 "data_offset": 0, 00:13:58.866 "data_size": 63488 00:13:58.866 }, 00:13:58.866 { 00:13:58.866 "name": "BaseBdev2", 00:13:58.866 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:13:58.866 "is_configured": true, 00:13:58.866 "data_offset": 2048, 00:13:58.866 "data_size": 63488 00:13:58.866 } 00:13:58.866 ] 00:13:58.866 }' 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.866 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.436 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.436 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.436 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.436 [2024-12-13 08:25:11.520383] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.436 [2024-12-13 08:25:11.520542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.436 [2024-12-13 08:25:11.520608] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:59.436 [2024-12-13 08:25:11.520653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.436 [2024-12-13 08:25:11.521353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.436 [2024-12-13 08:25:11.521434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.436 [2024-12-13 08:25:11.521593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:59.436 [2024-12-13 08:25:11.521644] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:59.436 [2024-12-13 08:25:11.521692] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:59.436 [2024-12-13 08:25:11.521761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.436 [2024-12-13 08:25:11.540496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:59.436 spare 00:13:59.436 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.436 08:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:59.436 [2024-12-13 08:25:11.542845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.375 "name": "raid_bdev1", 00:14:00.375 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:00.375 "strip_size_kb": 0, 00:14:00.375 
"state": "online", 00:14:00.375 "raid_level": "raid1", 00:14:00.375 "superblock": true, 00:14:00.375 "num_base_bdevs": 2, 00:14:00.375 "num_base_bdevs_discovered": 2, 00:14:00.375 "num_base_bdevs_operational": 2, 00:14:00.375 "process": { 00:14:00.375 "type": "rebuild", 00:14:00.375 "target": "spare", 00:14:00.375 "progress": { 00:14:00.375 "blocks": 20480, 00:14:00.375 "percent": 32 00:14:00.375 } 00:14:00.375 }, 00:14:00.375 "base_bdevs_list": [ 00:14:00.375 { 00:14:00.375 "name": "spare", 00:14:00.375 "uuid": "9d7a0c2f-508a-5c52-a1c1-2301b014625c", 00:14:00.375 "is_configured": true, 00:14:00.375 "data_offset": 2048, 00:14:00.375 "data_size": 63488 00:14:00.375 }, 00:14:00.375 { 00:14:00.375 "name": "BaseBdev2", 00:14:00.375 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:00.375 "is_configured": true, 00:14:00.375 "data_offset": 2048, 00:14:00.375 "data_size": 63488 00:14:00.375 } 00:14:00.375 ] 00:14:00.375 }' 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.375 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.375 [2024-12-13 08:25:12.706378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.636 [2024-12-13 08:25:12.749315] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:00.636 [2024-12-13 08:25:12.749422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.636 [2024-12-13 08:25:12.749450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.636 [2024-12-13 08:25:12.749460] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.636 08:25:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.636 "name": "raid_bdev1", 00:14:00.636 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:00.636 "strip_size_kb": 0, 00:14:00.636 "state": "online", 00:14:00.636 "raid_level": "raid1", 00:14:00.636 "superblock": true, 00:14:00.636 "num_base_bdevs": 2, 00:14:00.636 "num_base_bdevs_discovered": 1, 00:14:00.636 "num_base_bdevs_operational": 1, 00:14:00.636 "base_bdevs_list": [ 00:14:00.636 { 00:14:00.636 "name": null, 00:14:00.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.636 "is_configured": false, 00:14:00.636 "data_offset": 0, 00:14:00.636 "data_size": 63488 00:14:00.636 }, 00:14:00.636 { 00:14:00.636 "name": "BaseBdev2", 00:14:00.636 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:00.636 "is_configured": true, 00:14:00.636 "data_offset": 2048, 00:14:00.636 "data_size": 63488 00:14:00.636 } 00:14:00.636 ] 00:14:00.636 }' 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.636 08:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.895 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.895 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.896 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.155 "name": "raid_bdev1", 00:14:01.155 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:01.155 "strip_size_kb": 0, 00:14:01.155 "state": "online", 00:14:01.155 "raid_level": "raid1", 00:14:01.155 "superblock": true, 00:14:01.155 "num_base_bdevs": 2, 00:14:01.155 "num_base_bdevs_discovered": 1, 00:14:01.155 "num_base_bdevs_operational": 1, 00:14:01.155 "base_bdevs_list": [ 00:14:01.155 { 00:14:01.155 "name": null, 00:14:01.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.155 "is_configured": false, 00:14:01.155 "data_offset": 0, 00:14:01.155 "data_size": 63488 00:14:01.155 }, 00:14:01.155 { 00:14:01.155 "name": "BaseBdev2", 00:14:01.155 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:01.155 "is_configured": true, 00:14:01.155 "data_offset": 2048, 00:14:01.155 "data_size": 63488 00:14:01.155 } 00:14:01.155 ] 00:14:01.155 }' 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.155 [2024-12-13 08:25:13.386558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:01.155 [2024-12-13 08:25:13.386607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.155 [2024-12-13 08:25:13.386627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:01.155 [2024-12-13 08:25:13.386635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.155 [2024-12-13 08:25:13.387085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.155 [2024-12-13 08:25:13.387119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.155 [2024-12-13 08:25:13.387221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:01.155 [2024-12-13 08:25:13.387235] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:01.155 [2024-12-13 08:25:13.387247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:01.155 [2024-12-13 08:25:13.387257] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:01.155 BaseBdev1 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.155 08:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.091 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.091 "name": "raid_bdev1", 00:14:02.091 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:02.091 "strip_size_kb": 0, 00:14:02.091 "state": "online", 00:14:02.091 "raid_level": "raid1", 00:14:02.092 "superblock": true, 00:14:02.092 "num_base_bdevs": 2, 00:14:02.092 "num_base_bdevs_discovered": 1, 00:14:02.092 "num_base_bdevs_operational": 1, 00:14:02.092 "base_bdevs_list": [ 00:14:02.092 { 00:14:02.092 "name": null, 00:14:02.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.092 "is_configured": false, 00:14:02.092 "data_offset": 0, 00:14:02.092 "data_size": 63488 00:14:02.092 }, 00:14:02.092 { 00:14:02.092 "name": "BaseBdev2", 00:14:02.092 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:02.092 "is_configured": true, 00:14:02.092 "data_offset": 2048, 00:14:02.092 "data_size": 63488 00:14:02.092 } 00:14:02.092 ] 00:14:02.092 }' 00:14:02.092 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.092 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.661 "name": "raid_bdev1", 00:14:02.661 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:02.661 "strip_size_kb": 0, 00:14:02.661 "state": "online", 00:14:02.661 "raid_level": "raid1", 00:14:02.661 "superblock": true, 00:14:02.661 "num_base_bdevs": 2, 00:14:02.661 "num_base_bdevs_discovered": 1, 00:14:02.661 "num_base_bdevs_operational": 1, 00:14:02.661 "base_bdevs_list": [ 00:14:02.661 { 00:14:02.661 "name": null, 00:14:02.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.661 "is_configured": false, 00:14:02.661 "data_offset": 0, 00:14:02.661 "data_size": 63488 00:14:02.661 }, 00:14:02.661 { 00:14:02.661 "name": "BaseBdev2", 00:14:02.661 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:02.661 "is_configured": true, 00:14:02.661 "data_offset": 2048, 00:14:02.661 "data_size": 63488 00:14:02.661 } 00:14:02.661 ] 00:14:02.661 }' 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.661 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.662 [2024-12-13 08:25:14.928236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.662 [2024-12-13 08:25:14.928408] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.662 [2024-12-13 08:25:14.928431] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.662 request: 00:14:02.662 { 00:14:02.662 "base_bdev": "BaseBdev1", 00:14:02.662 "raid_bdev": "raid_bdev1", 00:14:02.662 "method": "bdev_raid_add_base_bdev", 00:14:02.662 "req_id": 1 00:14:02.662 } 00:14:02.662 Got JSON-RPC error response 00:14:02.662 response: 00:14:02.662 { 00:14:02.662 "code": -22, 00:14:02.662 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:02.662 } 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.662 08:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:03.599 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.858 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.858 "name": "raid_bdev1", 00:14:03.858 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:03.858 "strip_size_kb": 0, 00:14:03.858 "state": "online", 00:14:03.858 "raid_level": "raid1", 00:14:03.858 "superblock": true, 00:14:03.858 "num_base_bdevs": 2, 00:14:03.858 "num_base_bdevs_discovered": 1, 00:14:03.858 "num_base_bdevs_operational": 1, 00:14:03.858 "base_bdevs_list": [ 00:14:03.858 { 00:14:03.858 "name": null, 00:14:03.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.858 "is_configured": false, 00:14:03.858 "data_offset": 0, 00:14:03.858 "data_size": 63488 00:14:03.858 }, 00:14:03.858 { 00:14:03.858 "name": "BaseBdev2", 00:14:03.858 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:03.858 "is_configured": true, 00:14:03.858 "data_offset": 2048, 00:14:03.858 "data_size": 63488 00:14:03.858 } 00:14:03.858 ] 00:14:03.858 }' 00:14:03.858 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.858 08:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.117 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.117 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.117 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.117 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.118 08:25:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.118 "name": "raid_bdev1", 00:14:04.118 "uuid": "12c43e85-12af-42be-b614-ca97f2a606f7", 00:14:04.118 "strip_size_kb": 0, 00:14:04.118 "state": "online", 00:14:04.118 "raid_level": "raid1", 00:14:04.118 "superblock": true, 00:14:04.118 "num_base_bdevs": 2, 00:14:04.118 "num_base_bdevs_discovered": 1, 00:14:04.118 "num_base_bdevs_operational": 1, 00:14:04.118 "base_bdevs_list": [ 00:14:04.118 { 00:14:04.118 "name": null, 00:14:04.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.118 "is_configured": false, 00:14:04.118 "data_offset": 0, 00:14:04.118 "data_size": 63488 00:14:04.118 }, 00:14:04.118 { 00:14:04.118 "name": "BaseBdev2", 00:14:04.118 "uuid": "df79be87-88fd-593f-aa1b-726c20bc1827", 00:14:04.118 "is_configured": true, 00:14:04.118 "data_offset": 2048, 00:14:04.118 "data_size": 63488 00:14:04.118 } 00:14:04.118 ] 00:14:04.118 }' 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.118 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.377 08:25:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77024 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77024 ']' 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77024 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77024 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77024' 00:14:04.377 killing process with pid 77024 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77024 00:14:04.377 Received shutdown signal, test time was about 16.982798 seconds 00:14:04.377 00:14:04.377 Latency(us) 00:14:04.377 [2024-12-13T08:25:16.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.377 [2024-12-13T08:25:16.742Z] =================================================================================================================== 00:14:04.377 [2024-12-13T08:25:16.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.377 [2024-12-13 08:25:16.525876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.377 [2024-12-13 08:25:16.526011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.377 08:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77024 00:14:04.377 [2024-12-13 08:25:16.526065] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.377 [2024-12-13 08:25:16.526076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:04.636 [2024-12-13 08:25:16.753927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.571 08:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.571 00:14:05.571 real 0m20.166s 00:14:05.571 user 0m26.398s 00:14:05.571 sys 0m2.259s 00:14:05.571 ************************************ 00:14:05.571 END TEST raid_rebuild_test_sb_io 00:14:05.571 ************************************ 00:14:05.572 08:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.572 08:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 08:25:17 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:05.831 08:25:17 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:05.831 08:25:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:05.831 08:25:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.831 08:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 ************************************ 00:14:05.831 START TEST raid_rebuild_test 00:14:05.831 ************************************ 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:05.831 08:25:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77715 00:14:05.831 08:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77715 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77715 ']' 00:14:05.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.831 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 [2024-12-13 08:25:18.103010] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:14:05.831 [2024-12-13 08:25:18.103424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77715 ] 00:14:05.831 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.831 Zero copy mechanism will not be used. 00:14:06.090 [2024-12-13 08:25:18.284343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.090 [2024-12-13 08:25:18.413249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.349 [2024-12-13 08:25:18.616395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.349 [2024-12-13 08:25:18.616561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.608 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 BaseBdev1_malloc 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:06.868 [2024-12-13 08:25:18.987645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.868 [2024-12-13 08:25:18.987718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.868 [2024-12-13 08:25:18.987744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.868 [2024-12-13 08:25:18.987756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.868 [2024-12-13 08:25:18.989903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.868 [2024-12-13 08:25:18.989948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.868 BaseBdev1 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 BaseBdev2_malloc 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 [2024-12-13 08:25:19.042819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.868 [2024-12-13 08:25:19.042948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:06.868 [2024-12-13 08:25:19.042973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.868 [2024-12-13 08:25:19.042986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.868 [2024-12-13 08:25:19.045157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.868 [2024-12-13 08:25:19.045194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.868 BaseBdev2 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 BaseBdev3_malloc 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 [2024-12-13 08:25:19.108693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:06.868 [2024-12-13 08:25:19.108756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.868 [2024-12-13 08:25:19.108779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:06.868 [2024-12-13 08:25:19.108790] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.868 [2024-12-13 08:25:19.110868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.868 [2024-12-13 08:25:19.110966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.868 BaseBdev3 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 BaseBdev4_malloc 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.868 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-12-13 08:25:19.156377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:06.869 [2024-12-13 08:25:19.156493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.869 [2024-12-13 08:25:19.156528] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:06.869 [2024-12-13 08:25:19.156539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.869 [2024-12-13 08:25:19.158520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.869 [2024-12-13 08:25:19.158559] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:06.869 BaseBdev4 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 spare_malloc 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 spare_delay 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-12-13 08:25:19.218279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.869 [2024-12-13 08:25:19.218387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.869 [2024-12-13 08:25:19.218407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:06.869 [2024-12-13 08:25:19.218417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.869 [2024-12-13 
08:25:19.220458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.869 [2024-12-13 08:25:19.220498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.869 spare 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.869 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.869 [2024-12-13 08:25:19.230299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.129 [2024-12-13 08:25:19.232047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.129 [2024-12-13 08:25:19.232112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.129 [2024-12-13 08:25:19.232178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.129 [2024-12-13 08:25:19.232267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:07.129 [2024-12-13 08:25:19.232283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:07.129 [2024-12-13 08:25:19.232524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:07.129 [2024-12-13 08:25:19.232696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:07.129 [2024-12-13 08:25:19.232708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:07.129 [2024-12-13 08:25:19.232844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.129 "name": "raid_bdev1", 00:14:07.129 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:07.129 "strip_size_kb": 0, 00:14:07.129 "state": "online", 00:14:07.129 "raid_level": 
"raid1", 00:14:07.129 "superblock": false, 00:14:07.129 "num_base_bdevs": 4, 00:14:07.129 "num_base_bdevs_discovered": 4, 00:14:07.129 "num_base_bdevs_operational": 4, 00:14:07.129 "base_bdevs_list": [ 00:14:07.129 { 00:14:07.129 "name": "BaseBdev1", 00:14:07.129 "uuid": "c01574a2-c9c6-5b5f-a410-126caced9965", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 0, 00:14:07.129 "data_size": 65536 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "BaseBdev2", 00:14:07.129 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 0, 00:14:07.129 "data_size": 65536 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "BaseBdev3", 00:14:07.129 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 0, 00:14:07.129 "data_size": 65536 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "BaseBdev4", 00:14:07.129 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 0, 00:14:07.129 "data_size": 65536 00:14:07.129 } 00:14:07.129 ] 00:14:07.129 }' 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.129 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.389 [2024-12-13 08:25:19.681943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.389 08:25:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.389 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.649 [2024-12-13 08:25:19.945187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:07.649 /dev/nbd0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.649 1+0 records in 00:14:07.649 1+0 records out 00:14:07.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560987 s, 7.3 MB/s 00:14:07.649 08:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:07.649 08:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:14.278 65536+0 records in 00:14:14.278 65536+0 records out 00:14:14.278 33554432 bytes (34 MB, 32 MiB) copied, 5.67131 s, 5.9 MB/s 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.278 [2024-12-13 08:25:25.883276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.278 
08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.278 [2024-12-13 08:25:25.919334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.278 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.279 08:25:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.279 "name": "raid_bdev1", 00:14:14.279 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:14.279 "strip_size_kb": 0, 00:14:14.279 "state": "online", 00:14:14.279 "raid_level": "raid1", 00:14:14.279 "superblock": false, 00:14:14.279 "num_base_bdevs": 4, 00:14:14.279 "num_base_bdevs_discovered": 3, 00:14:14.279 "num_base_bdevs_operational": 3, 00:14:14.279 "base_bdevs_list": [ 00:14:14.279 { 00:14:14.279 "name": null, 00:14:14.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.279 "is_configured": false, 00:14:14.279 "data_offset": 0, 00:14:14.279 "data_size": 65536 00:14:14.279 }, 00:14:14.279 { 00:14:14.279 "name": "BaseBdev2", 00:14:14.279 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:14.279 "is_configured": true, 00:14:14.279 "data_offset": 0, 00:14:14.279 "data_size": 65536 00:14:14.279 }, 00:14:14.279 { 00:14:14.279 "name": "BaseBdev3", 00:14:14.279 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:14.279 "is_configured": true, 00:14:14.279 "data_offset": 0, 00:14:14.279 "data_size": 65536 00:14:14.279 }, 00:14:14.279 { 00:14:14.279 "name": "BaseBdev4", 00:14:14.279 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:14.279 
"is_configured": true, 00:14:14.279 "data_offset": 0, 00:14:14.279 "data_size": 65536 00:14:14.279 } 00:14:14.279 ] 00:14:14.279 }' 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.279 08:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.279 08:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.279 08:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.279 08:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.279 [2024-12-13 08:25:26.418600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.279 [2024-12-13 08:25:26.433040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:14.279 08:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.279 08:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.279 [2024-12-13 08:25:26.435045] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.216 "name": "raid_bdev1", 00:14:15.216 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:15.216 "strip_size_kb": 0, 00:14:15.216 "state": "online", 00:14:15.216 "raid_level": "raid1", 00:14:15.216 "superblock": false, 00:14:15.216 "num_base_bdevs": 4, 00:14:15.216 "num_base_bdevs_discovered": 4, 00:14:15.216 "num_base_bdevs_operational": 4, 00:14:15.216 "process": { 00:14:15.216 "type": "rebuild", 00:14:15.216 "target": "spare", 00:14:15.216 "progress": { 00:14:15.216 "blocks": 20480, 00:14:15.216 "percent": 31 00:14:15.216 } 00:14:15.216 }, 00:14:15.216 "base_bdevs_list": [ 00:14:15.216 { 00:14:15.216 "name": "spare", 00:14:15.216 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:15.216 "is_configured": true, 00:14:15.216 "data_offset": 0, 00:14:15.216 "data_size": 65536 00:14:15.216 }, 00:14:15.216 { 00:14:15.216 "name": "BaseBdev2", 00:14:15.216 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:15.216 "is_configured": true, 00:14:15.216 "data_offset": 0, 00:14:15.216 "data_size": 65536 00:14:15.216 }, 00:14:15.216 { 00:14:15.216 "name": "BaseBdev3", 00:14:15.216 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:15.216 "is_configured": true, 00:14:15.216 "data_offset": 0, 00:14:15.216 "data_size": 65536 00:14:15.216 }, 00:14:15.216 { 00:14:15.216 "name": "BaseBdev4", 00:14:15.216 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:15.216 "is_configured": true, 00:14:15.216 "data_offset": 0, 00:14:15.216 "data_size": 65536 00:14:15.216 } 00:14:15.216 ] 00:14:15.216 }' 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.216 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.475 [2024-12-13 08:25:27.594402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.475 [2024-12-13 08:25:27.640623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.475 [2024-12-13 08:25:27.640778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.475 [2024-12-13 08:25:27.640800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.475 [2024-12-13 08:25:27.640811] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.475 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.475 "name": "raid_bdev1", 00:14:15.475 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:15.475 "strip_size_kb": 0, 00:14:15.475 "state": "online", 00:14:15.476 "raid_level": "raid1", 00:14:15.476 "superblock": false, 00:14:15.476 "num_base_bdevs": 4, 00:14:15.476 "num_base_bdevs_discovered": 3, 00:14:15.476 "num_base_bdevs_operational": 3, 00:14:15.476 "base_bdevs_list": [ 00:14:15.476 { 00:14:15.476 "name": null, 00:14:15.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.476 "is_configured": false, 00:14:15.476 "data_offset": 0, 00:14:15.476 "data_size": 65536 00:14:15.476 }, 00:14:15.476 { 00:14:15.476 "name": "BaseBdev2", 00:14:15.476 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:15.476 "is_configured": true, 00:14:15.476 "data_offset": 0, 00:14:15.476 "data_size": 65536 00:14:15.476 }, 00:14:15.476 { 
00:14:15.476 "name": "BaseBdev3", 00:14:15.476 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:15.476 "is_configured": true, 00:14:15.476 "data_offset": 0, 00:14:15.476 "data_size": 65536 00:14:15.476 }, 00:14:15.476 { 00:14:15.476 "name": "BaseBdev4", 00:14:15.476 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:15.476 "is_configured": true, 00:14:15.476 "data_offset": 0, 00:14:15.476 "data_size": 65536 00:14:15.476 } 00:14:15.476 ] 00:14:15.476 }' 00:14:15.476 08:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.476 08:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.044 "name": "raid_bdev1", 00:14:16.044 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:16.044 "strip_size_kb": 0, 00:14:16.044 "state": "online", 
00:14:16.044 "raid_level": "raid1", 00:14:16.044 "superblock": false, 00:14:16.044 "num_base_bdevs": 4, 00:14:16.044 "num_base_bdevs_discovered": 3, 00:14:16.044 "num_base_bdevs_operational": 3, 00:14:16.044 "base_bdevs_list": [ 00:14:16.044 { 00:14:16.044 "name": null, 00:14:16.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.044 "is_configured": false, 00:14:16.044 "data_offset": 0, 00:14:16.044 "data_size": 65536 00:14:16.044 }, 00:14:16.044 { 00:14:16.044 "name": "BaseBdev2", 00:14:16.044 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:16.044 "is_configured": true, 00:14:16.044 "data_offset": 0, 00:14:16.044 "data_size": 65536 00:14:16.044 }, 00:14:16.044 { 00:14:16.044 "name": "BaseBdev3", 00:14:16.044 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:16.044 "is_configured": true, 00:14:16.044 "data_offset": 0, 00:14:16.044 "data_size": 65536 00:14:16.044 }, 00:14:16.044 { 00:14:16.044 "name": "BaseBdev4", 00:14:16.044 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:16.044 "is_configured": true, 00:14:16.044 "data_offset": 0, 00:14:16.044 "data_size": 65536 00:14:16.044 } 00:14:16.044 ] 00:14:16.044 }' 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.044 [2024-12-13 08:25:28.241544] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.044 [2024-12-13 08:25:28.256934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.044 08:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:16.044 [2024-12-13 08:25:28.258937] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.984 "name": "raid_bdev1", 00:14:16.984 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:16.984 "strip_size_kb": 0, 00:14:16.984 "state": "online", 00:14:16.984 "raid_level": "raid1", 00:14:16.984 "superblock": false, 00:14:16.984 "num_base_bdevs": 4, 00:14:16.984 
"num_base_bdevs_discovered": 4, 00:14:16.984 "num_base_bdevs_operational": 4, 00:14:16.984 "process": { 00:14:16.984 "type": "rebuild", 00:14:16.984 "target": "spare", 00:14:16.984 "progress": { 00:14:16.984 "blocks": 20480, 00:14:16.984 "percent": 31 00:14:16.984 } 00:14:16.984 }, 00:14:16.984 "base_bdevs_list": [ 00:14:16.984 { 00:14:16.984 "name": "spare", 00:14:16.984 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:16.984 "is_configured": true, 00:14:16.984 "data_offset": 0, 00:14:16.984 "data_size": 65536 00:14:16.984 }, 00:14:16.984 { 00:14:16.984 "name": "BaseBdev2", 00:14:16.984 "uuid": "0e6ccb56-bbed-5335-84f6-62e64d6d0a49", 00:14:16.984 "is_configured": true, 00:14:16.984 "data_offset": 0, 00:14:16.984 "data_size": 65536 00:14:16.984 }, 00:14:16.984 { 00:14:16.984 "name": "BaseBdev3", 00:14:16.984 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:16.984 "is_configured": true, 00:14:16.984 "data_offset": 0, 00:14:16.984 "data_size": 65536 00:14:16.984 }, 00:14:16.984 { 00:14:16.984 "name": "BaseBdev4", 00:14:16.984 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:16.984 "is_configured": true, 00:14:16.984 "data_offset": 0, 00:14:16.984 "data_size": 65536 00:14:16.984 } 00:14:16.984 ] 00:14:16.984 }' 00:14:16.984 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.243 [2024-12-13 08:25:29.430028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.243 [2024-12-13 08:25:29.464208] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.243 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.244 08:25:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.244 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.244 "name": "raid_bdev1", 00:14:17.244 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:17.244 "strip_size_kb": 0, 00:14:17.244 "state": "online", 00:14:17.244 "raid_level": "raid1", 00:14:17.244 "superblock": false, 00:14:17.244 "num_base_bdevs": 4, 00:14:17.244 "num_base_bdevs_discovered": 3, 00:14:17.244 "num_base_bdevs_operational": 3, 00:14:17.244 "process": { 00:14:17.244 "type": "rebuild", 00:14:17.244 "target": "spare", 00:14:17.244 "progress": { 00:14:17.244 "blocks": 24576, 00:14:17.244 "percent": 37 00:14:17.244 } 00:14:17.244 }, 00:14:17.244 "base_bdevs_list": [ 00:14:17.244 { 00:14:17.244 "name": "spare", 00:14:17.244 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:17.244 "is_configured": true, 00:14:17.244 "data_offset": 0, 00:14:17.244 "data_size": 65536 00:14:17.244 }, 00:14:17.244 { 00:14:17.244 "name": null, 00:14:17.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.244 "is_configured": false, 00:14:17.244 "data_offset": 0, 00:14:17.244 "data_size": 65536 00:14:17.244 }, 00:14:17.244 { 00:14:17.244 "name": "BaseBdev3", 00:14:17.244 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:17.244 "is_configured": true, 00:14:17.244 "data_offset": 0, 00:14:17.244 "data_size": 65536 00:14:17.244 }, 00:14:17.244 { 00:14:17.244 "name": "BaseBdev4", 00:14:17.244 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:17.244 "is_configured": true, 00:14:17.244 "data_offset": 0, 00:14:17.244 "data_size": 65536 00:14:17.244 } 00:14:17.244 ] 00:14:17.244 }' 00:14:17.244 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.244 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.244 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.504 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.504 "name": "raid_bdev1", 00:14:17.504 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:17.504 "strip_size_kb": 0, 00:14:17.504 "state": "online", 00:14:17.504 "raid_level": "raid1", 00:14:17.504 "superblock": false, 00:14:17.504 "num_base_bdevs": 4, 00:14:17.504 "num_base_bdevs_discovered": 3, 00:14:17.504 "num_base_bdevs_operational": 3, 00:14:17.504 "process": { 00:14:17.504 "type": "rebuild", 00:14:17.504 "target": "spare", 00:14:17.504 "progress": { 
00:14:17.504 "blocks": 26624, 00:14:17.504 "percent": 40 00:14:17.504 } 00:14:17.504 }, 00:14:17.504 "base_bdevs_list": [ 00:14:17.504 { 00:14:17.504 "name": "spare", 00:14:17.504 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:17.505 "is_configured": true, 00:14:17.505 "data_offset": 0, 00:14:17.505 "data_size": 65536 00:14:17.505 }, 00:14:17.505 { 00:14:17.505 "name": null, 00:14:17.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.505 "is_configured": false, 00:14:17.505 "data_offset": 0, 00:14:17.505 "data_size": 65536 00:14:17.505 }, 00:14:17.505 { 00:14:17.505 "name": "BaseBdev3", 00:14:17.505 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:17.505 "is_configured": true, 00:14:17.505 "data_offset": 0, 00:14:17.505 "data_size": 65536 00:14:17.505 }, 00:14:17.505 { 00:14:17.505 "name": "BaseBdev4", 00:14:17.505 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:17.505 "is_configured": true, 00:14:17.505 "data_offset": 0, 00:14:17.505 "data_size": 65536 00:14:17.505 } 00:14:17.505 ] 00:14:17.505 }' 00:14:17.505 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.505 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.505 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.505 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.505 08:25:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.443 08:25:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.702 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.702 "name": "raid_bdev1", 00:14:18.702 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:18.702 "strip_size_kb": 0, 00:14:18.702 "state": "online", 00:14:18.702 "raid_level": "raid1", 00:14:18.702 "superblock": false, 00:14:18.702 "num_base_bdevs": 4, 00:14:18.702 "num_base_bdevs_discovered": 3, 00:14:18.702 "num_base_bdevs_operational": 3, 00:14:18.702 "process": { 00:14:18.702 "type": "rebuild", 00:14:18.702 "target": "spare", 00:14:18.702 "progress": { 00:14:18.702 "blocks": 51200, 00:14:18.702 "percent": 78 00:14:18.702 } 00:14:18.702 }, 00:14:18.702 "base_bdevs_list": [ 00:14:18.702 { 00:14:18.702 "name": "spare", 00:14:18.702 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:18.702 "is_configured": true, 00:14:18.703 "data_offset": 0, 00:14:18.703 "data_size": 65536 00:14:18.703 }, 00:14:18.703 { 00:14:18.703 "name": null, 00:14:18.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.703 "is_configured": false, 00:14:18.703 "data_offset": 0, 00:14:18.703 "data_size": 65536 00:14:18.703 }, 00:14:18.703 { 00:14:18.703 "name": "BaseBdev3", 00:14:18.703 "uuid": 
"352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:18.703 "is_configured": true, 00:14:18.703 "data_offset": 0, 00:14:18.703 "data_size": 65536 00:14:18.703 }, 00:14:18.703 { 00:14:18.703 "name": "BaseBdev4", 00:14:18.703 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:18.703 "is_configured": true, 00:14:18.703 "data_offset": 0, 00:14:18.703 "data_size": 65536 00:14:18.703 } 00:14:18.703 ] 00:14:18.703 }' 00:14:18.703 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.703 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.703 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.703 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.703 08:25:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.272 [2024-12-13 08:25:31.473954] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.272 [2024-12-13 08:25:31.474160] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.272 [2024-12-13 08:25:31.474215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.841 08:25:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.841 "name": "raid_bdev1", 00:14:19.841 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:19.841 "strip_size_kb": 0, 00:14:19.841 "state": "online", 00:14:19.841 "raid_level": "raid1", 00:14:19.841 "superblock": false, 00:14:19.841 "num_base_bdevs": 4, 00:14:19.841 "num_base_bdevs_discovered": 3, 00:14:19.841 "num_base_bdevs_operational": 3, 00:14:19.841 "base_bdevs_list": [ 00:14:19.841 { 00:14:19.841 "name": "spare", 00:14:19.841 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": null, 00:14:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.841 "is_configured": false, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": "BaseBdev3", 00:14:19.841 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": "BaseBdev4", 00:14:19.841 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 } 00:14:19.841 ] 00:14:19.841 }' 00:14:19.841 08:25:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.841 "name": "raid_bdev1", 00:14:19.841 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:19.841 "strip_size_kb": 0, 00:14:19.841 "state": "online", 00:14:19.841 "raid_level": "raid1", 00:14:19.841 "superblock": false, 00:14:19.841 "num_base_bdevs": 4, 00:14:19.841 "num_base_bdevs_discovered": 3, 00:14:19.841 "num_base_bdevs_operational": 3, 00:14:19.841 
"base_bdevs_list": [ 00:14:19.841 { 00:14:19.841 "name": "spare", 00:14:19.841 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": null, 00:14:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.841 "is_configured": false, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": "BaseBdev3", 00:14:19.841 "uuid": "352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 }, 00:14:19.841 { 00:14:19.841 "name": "BaseBdev4", 00:14:19.841 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:19.841 "is_configured": true, 00:14:19.841 "data_offset": 0, 00:14:19.841 "data_size": 65536 00:14:19.841 } 00:14:19.841 ] 00:14:19.841 }' 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.841 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.102 "name": "raid_bdev1", 00:14:20.102 "uuid": "3f5428b6-f027-4145-b9c4-34ce1fd3b44b", 00:14:20.102 "strip_size_kb": 0, 00:14:20.102 "state": "online", 00:14:20.102 "raid_level": "raid1", 00:14:20.102 "superblock": false, 00:14:20.102 "num_base_bdevs": 4, 00:14:20.102 "num_base_bdevs_discovered": 3, 00:14:20.102 "num_base_bdevs_operational": 3, 00:14:20.102 "base_bdevs_list": [ 00:14:20.102 { 00:14:20.102 "name": "spare", 00:14:20.102 "uuid": "c7e0b388-3684-5d18-8719-b3923d1ab3af", 00:14:20.102 "is_configured": true, 00:14:20.102 "data_offset": 0, 00:14:20.102 "data_size": 65536 00:14:20.102 }, 00:14:20.102 { 00:14:20.102 "name": null, 00:14:20.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.102 "is_configured": false, 00:14:20.102 "data_offset": 0, 00:14:20.102 "data_size": 65536 00:14:20.102 }, 00:14:20.102 { 00:14:20.102 "name": "BaseBdev3", 00:14:20.102 "uuid": 
"352b3d0e-58f7-5919-b4b8-83263254e3d2", 00:14:20.102 "is_configured": true, 00:14:20.102 "data_offset": 0, 00:14:20.102 "data_size": 65536 00:14:20.102 }, 00:14:20.102 { 00:14:20.102 "name": "BaseBdev4", 00:14:20.102 "uuid": "04ca9093-6882-521a-bdb1-f38fe1f292fe", 00:14:20.102 "is_configured": true, 00:14:20.102 "data_offset": 0, 00:14:20.102 "data_size": 65536 00:14:20.102 } 00:14:20.102 ] 00:14:20.102 }' 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.102 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.362 [2024-12-13 08:25:32.652093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.362 [2024-12-13 08:25:32.652143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.362 [2024-12-13 08:25:32.652237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.362 [2024-12-13 08:25:32.652341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.362 [2024-12-13 08:25:32.652351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.362 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:20.643 /dev/nbd0 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.643 08:25:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.643 1+0 records in 00:14:20.643 1+0 records out 00:14:20.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575507 s, 7.1 MB/s 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.643 08:25:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:20.920 /dev/nbd1 00:14:20.920 
08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.920 1+0 records in 00:14:20.920 1+0 records out 00:14:20.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343962 s, 11.9 MB/s 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.920 08:25:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.180 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.439 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77715 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77715 ']' 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77715 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77715 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.700 killing process with pid 77715 00:14:21.700 Received shutdown signal, test time was about 60.000000 seconds 00:14:21.700 00:14:21.700 Latency(us) 00:14:21.700 [2024-12-13T08:25:34.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.700 
[2024-12-13T08:25:34.065Z] =================================================================================================================== 00:14:21.700 [2024-12-13T08:25:34.065Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77715' 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77715 00:14:21.700 [2024-12-13 08:25:33.924626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.700 08:25:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77715 00:14:22.270 [2024-12-13 08:25:34.427808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.226 08:25:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:23.226 00:14:23.226 real 0m17.583s 00:14:23.226 user 0m19.718s 00:14:23.226 sys 0m3.072s 00:14:23.226 08:25:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.226 ************************************ 00:14:23.226 END TEST raid_rebuild_test 00:14:23.226 ************************************ 00:14:23.226 08:25:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.486 08:25:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:23.486 08:25:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:23.486 08:25:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.486 08:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.486 ************************************ 00:14:23.486 START TEST raid_rebuild_test_sb 00:14:23.486 ************************************ 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78158 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78158 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78158 ']' 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.486 08:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.486 [2024-12-13 08:25:35.744240] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:14:23.486 [2024-12-13 08:25:35.744485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78158 ] 00:14:23.486 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.486 Zero copy mechanism will not be used. 00:14:23.746 [2024-12-13 08:25:35.918957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.746 [2024-12-13 08:25:36.040132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.007 [2024-12-13 08:25:36.246435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.007 [2024-12-13 08:25:36.246592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.267 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 BaseBdev1_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 [2024-12-13 08:25:36.663063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:24.527 [2024-12-13 08:25:36.663226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.527 [2024-12-13 08:25:36.663293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:24.527 [2024-12-13 08:25:36.663338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.527 [2024-12-13 08:25:36.665683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.527 [2024-12-13 08:25:36.665762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.527 BaseBdev1 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 BaseBdev2_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 [2024-12-13 08:25:36.718273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:24.527 [2024-12-13 08:25:36.718346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.527 [2024-12-13 08:25:36.718367] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:24.527 [2024-12-13 08:25:36.718378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.527 [2024-12-13 08:25:36.720561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.527 [2024-12-13 08:25:36.720655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.527 BaseBdev2 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 BaseBdev3_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 [2024-12-13 08:25:36.790380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:24.527 [2024-12-13 08:25:36.790446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.527 [2024-12-13 08:25:36.790469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:24.527 [2024-12-13 08:25:36.790480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.527 [2024-12-13 08:25:36.792633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.527 [2024-12-13 08:25:36.792767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:24.527 BaseBdev3 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.527 BaseBdev4_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.527 [2024-12-13 08:25:36.845827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:24.527 [2024-12-13 08:25:36.845913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.527 [2024-12-13 08:25:36.845940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.527 [2024-12-13 08:25:36.845953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.527 [2024-12-13 08:25:36.848208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.527 [2024-12-13 08:25:36.848305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:24.527 BaseBdev4 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.527 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.787 spare_malloc 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.787 spare_delay 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.787 08:25:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.787 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.788 [2024-12-13 08:25:36.915018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.788 [2024-12-13 08:25:36.915174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.788 [2024-12-13 08:25:36.915206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:24.788 [2024-12-13 08:25:36.915220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.788 [2024-12-13 08:25:36.917749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.788 [2024-12-13 08:25:36.917796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.788 spare 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.788 [2024-12-13 08:25:36.927048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.788 [2024-12-13 08:25:36.929258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.788 [2024-12-13 08:25:36.929330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.788 [2024-12-13 08:25:36.929388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.788 [2024-12-13 08:25:36.929610] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.788 [2024-12-13 08:25:36.929626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.788 [2024-12-13 08:25:36.929929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:24.788 [2024-12-13 08:25:36.930161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.788 [2024-12-13 08:25:36.930175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.788 [2024-12-13 08:25:36.930376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.788 "name": "raid_bdev1", 00:14:24.788 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:24.788 "strip_size_kb": 0, 00:14:24.788 "state": "online", 00:14:24.788 "raid_level": "raid1", 00:14:24.788 "superblock": true, 00:14:24.788 "num_base_bdevs": 4, 00:14:24.788 "num_base_bdevs_discovered": 4, 00:14:24.788 "num_base_bdevs_operational": 4, 00:14:24.788 "base_bdevs_list": [ 00:14:24.788 { 00:14:24.788 "name": "BaseBdev1", 00:14:24.788 "uuid": "927ce6bf-068b-5901-9910-cbbcde82b849", 00:14:24.788 "is_configured": true, 00:14:24.788 "data_offset": 2048, 00:14:24.788 "data_size": 63488 00:14:24.788 }, 00:14:24.788 { 00:14:24.788 "name": "BaseBdev2", 00:14:24.788 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:24.788 "is_configured": true, 00:14:24.788 "data_offset": 2048, 00:14:24.788 "data_size": 63488 00:14:24.788 }, 00:14:24.788 { 00:14:24.788 "name": "BaseBdev3", 00:14:24.788 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:24.788 "is_configured": true, 00:14:24.788 "data_offset": 2048, 00:14:24.788 "data_size": 63488 00:14:24.788 }, 00:14:24.788 { 00:14:24.788 "name": "BaseBdev4", 00:14:24.788 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:24.788 "is_configured": true, 00:14:24.788 "data_offset": 2048, 00:14:24.788 "data_size": 63488 00:14:24.788 } 00:14:24.788 ] 00:14:24.788 }' 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.788 08:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.047 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.047 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:25.047 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.047 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.047 [2024-12-13 08:25:37.398608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.307 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:25.567 [2024-12-13 08:25:37.677774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:25.567 /dev/nbd0 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:25.567 
08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.567 1+0 records in 00:14:25.567 1+0 records out 00:14:25.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344558 s, 11.9 MB/s 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:25.567 08:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:32.172 63488+0 records in 00:14:32.172 63488+0 records out 00:14:32.172 32505856 bytes (33 MB, 31 MiB) copied, 5.52074 s, 5.9 MB/s 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- 
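Side note on the dd figures above: the byte counts reported by dd are consistent with the test parameters earlier in this trace. This is a hedged sketch of that arithmetic, assuming the 32 MiB / 512-byte-block malloc base bdevs created via `bdev_malloc_create 32 512` and the 2048-block superblock data offset reported by `bdev_raid_get_bdevs`:

```python
# Sanity-check the sizes reported in the log above.
# Assumptions (all taken from the trace): each base bdev is created
# with "bdev_malloc_create 32 512" (32 MiB, 512-byte blocks); the
# raid1 superblock (-s) reserves data_offset = 2048 blocks; dd then
# writes the full usable region of the raid1 bdev.
MALLOC_MIB = 32
BLOCK_LEN = 512
DATA_OFFSET_BLOCKS = 2048  # from the bdev_raid_get_bdevs JSON output

base_blocks = MALLOC_MIB * 1024 * 1024 // BLOCK_LEN   # blocks per base bdev
usable_blocks = base_blocks - DATA_OFFSET_BLOCKS      # raid_bdev_size in blocks
usable_bytes = usable_blocks * BLOCK_LEN              # bytes dd should copy

print(usable_blocks, usable_bytes)  # 63488 32505856
```

This matches the log: `raid_bdev_size=63488`, `dd ... count=63488` and "32505856 bytes (33 MB, 31 MiB) copied".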
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.172 [2024-12-13 08:25:43.471993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.172 [2024-12-13 08:25:43.508026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.172 
08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.172 "name": "raid_bdev1", 00:14:32.172 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:32.172 "strip_size_kb": 0, 00:14:32.172 "state": 
"online", 00:14:32.172 "raid_level": "raid1", 00:14:32.172 "superblock": true, 00:14:32.172 "num_base_bdevs": 4, 00:14:32.172 "num_base_bdevs_discovered": 3, 00:14:32.172 "num_base_bdevs_operational": 3, 00:14:32.172 "base_bdevs_list": [ 00:14:32.172 { 00:14:32.172 "name": null, 00:14:32.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.172 "is_configured": false, 00:14:32.172 "data_offset": 0, 00:14:32.172 "data_size": 63488 00:14:32.172 }, 00:14:32.172 { 00:14:32.172 "name": "BaseBdev2", 00:14:32.172 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:32.172 "is_configured": true, 00:14:32.172 "data_offset": 2048, 00:14:32.172 "data_size": 63488 00:14:32.172 }, 00:14:32.172 { 00:14:32.172 "name": "BaseBdev3", 00:14:32.172 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:32.172 "is_configured": true, 00:14:32.172 "data_offset": 2048, 00:14:32.172 "data_size": 63488 00:14:32.172 }, 00:14:32.172 { 00:14:32.172 "name": "BaseBdev4", 00:14:32.172 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:32.172 "is_configured": true, 00:14:32.172 "data_offset": 2048, 00:14:32.172 "data_size": 63488 00:14:32.172 } 00:14:32.172 ] 00:14:32.172 }' 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.172 [2024-12-13 08:25:43.975282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.172 [2024-12-13 08:25:43.990264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.172 08:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:32.172 [2024-12-13 08:25:43.992261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.740 08:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.740 08:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.740 08:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.740 08:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.740 08:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.740 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.740 "name": "raid_bdev1", 00:14:32.740 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:32.740 "strip_size_kb": 0, 00:14:32.740 "state": "online", 00:14:32.740 "raid_level": "raid1", 00:14:32.740 "superblock": true, 00:14:32.740 "num_base_bdevs": 4, 00:14:32.740 "num_base_bdevs_discovered": 4, 00:14:32.740 "num_base_bdevs_operational": 4, 00:14:32.740 "process": { 00:14:32.740 "type": "rebuild", 00:14:32.740 "target": "spare", 00:14:32.740 "progress": { 00:14:32.740 "blocks": 20480, 
00:14:32.740 "percent": 32 00:14:32.740 } 00:14:32.740 }, 00:14:32.740 "base_bdevs_list": [ 00:14:32.740 { 00:14:32.740 "name": "spare", 00:14:32.740 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:32.740 "is_configured": true, 00:14:32.740 "data_offset": 2048, 00:14:32.740 "data_size": 63488 00:14:32.740 }, 00:14:32.740 { 00:14:32.740 "name": "BaseBdev2", 00:14:32.740 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:32.740 "is_configured": true, 00:14:32.740 "data_offset": 2048, 00:14:32.740 "data_size": 63488 00:14:32.740 }, 00:14:32.740 { 00:14:32.740 "name": "BaseBdev3", 00:14:32.740 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:32.741 "is_configured": true, 00:14:32.741 "data_offset": 2048, 00:14:32.741 "data_size": 63488 00:14:32.741 }, 00:14:32.741 { 00:14:32.741 "name": "BaseBdev4", 00:14:32.741 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:32.741 "is_configured": true, 00:14:32.741 "data_offset": 2048, 00:14:32.741 "data_size": 63488 00:14:32.741 } 00:14:32.741 ] 00:14:32.741 }' 00:14:32.741 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.741 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.741 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.999 [2024-12-13 08:25:45.131951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.999 [2024-12-13 08:25:45.197573] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:32.999 [2024-12-13 08:25:45.197642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.999 [2024-12-13 08:25:45.197660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.999 [2024-12-13 08:25:45.197670] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.999 "name": "raid_bdev1", 00:14:32.999 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:32.999 "strip_size_kb": 0, 00:14:32.999 "state": "online", 00:14:32.999 "raid_level": "raid1", 00:14:32.999 "superblock": true, 00:14:32.999 "num_base_bdevs": 4, 00:14:32.999 "num_base_bdevs_discovered": 3, 00:14:32.999 "num_base_bdevs_operational": 3, 00:14:32.999 "base_bdevs_list": [ 00:14:32.999 { 00:14:32.999 "name": null, 00:14:32.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.999 "is_configured": false, 00:14:32.999 "data_offset": 0, 00:14:32.999 "data_size": 63488 00:14:32.999 }, 00:14:32.999 { 00:14:32.999 "name": "BaseBdev2", 00:14:32.999 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:32.999 "is_configured": true, 00:14:32.999 "data_offset": 2048, 00:14:32.999 "data_size": 63488 00:14:32.999 }, 00:14:32.999 { 00:14:32.999 "name": "BaseBdev3", 00:14:32.999 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:32.999 "is_configured": true, 00:14:32.999 "data_offset": 2048, 00:14:32.999 "data_size": 63488 00:14:32.999 }, 00:14:32.999 { 00:14:32.999 "name": "BaseBdev4", 00:14:32.999 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:32.999 "is_configured": true, 00:14:32.999 "data_offset": 2048, 00:14:32.999 "data_size": 63488 00:14:32.999 } 00:14:32.999 ] 00:14:32.999 }' 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.999 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.587 "name": "raid_bdev1", 00:14:33.587 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:33.587 "strip_size_kb": 0, 00:14:33.587 "state": "online", 00:14:33.587 "raid_level": "raid1", 00:14:33.587 "superblock": true, 00:14:33.587 "num_base_bdevs": 4, 00:14:33.587 "num_base_bdevs_discovered": 3, 00:14:33.587 "num_base_bdevs_operational": 3, 00:14:33.587 "base_bdevs_list": [ 00:14:33.587 { 00:14:33.587 "name": null, 00:14:33.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.587 "is_configured": false, 00:14:33.587 "data_offset": 0, 00:14:33.587 "data_size": 63488 00:14:33.587 }, 00:14:33.587 { 00:14:33.587 "name": "BaseBdev2", 00:14:33.587 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:33.587 "is_configured": true, 00:14:33.587 "data_offset": 2048, 00:14:33.587 "data_size": 63488 00:14:33.587 }, 00:14:33.587 { 00:14:33.587 "name": "BaseBdev3", 00:14:33.587 "uuid": 
"e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:33.587 "is_configured": true, 00:14:33.587 "data_offset": 2048, 00:14:33.587 "data_size": 63488 00:14:33.587 }, 00:14:33.587 { 00:14:33.587 "name": "BaseBdev4", 00:14:33.587 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:33.587 "is_configured": true, 00:14:33.587 "data_offset": 2048, 00:14:33.587 "data_size": 63488 00:14:33.587 } 00:14:33.587 ] 00:14:33.587 }' 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.587 [2024-12-13 08:25:45.758318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.587 [2024-12-13 08:25:45.773236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.587 08:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.588 [2024-12-13 08:25:45.775195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.525 "name": "raid_bdev1", 00:14:34.525 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:34.525 "strip_size_kb": 0, 00:14:34.525 "state": "online", 00:14:34.525 "raid_level": "raid1", 00:14:34.525 "superblock": true, 00:14:34.525 "num_base_bdevs": 4, 00:14:34.525 "num_base_bdevs_discovered": 4, 00:14:34.525 "num_base_bdevs_operational": 4, 00:14:34.525 "process": { 00:14:34.525 "type": "rebuild", 00:14:34.525 "target": "spare", 00:14:34.525 "progress": { 00:14:34.525 "blocks": 20480, 00:14:34.525 "percent": 32 00:14:34.525 } 00:14:34.525 }, 00:14:34.525 "base_bdevs_list": [ 00:14:34.525 { 00:14:34.525 "name": "spare", 00:14:34.525 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:34.525 "is_configured": true, 00:14:34.525 "data_offset": 2048, 00:14:34.525 "data_size": 63488 00:14:34.525 }, 00:14:34.525 { 00:14:34.525 "name": "BaseBdev2", 00:14:34.525 "uuid": "09208db5-05e7-567f-8f26-0decc8d0644f", 00:14:34.525 "is_configured": true, 00:14:34.525 "data_offset": 2048, 
00:14:34.525 "data_size": 63488 00:14:34.525 }, 00:14:34.525 { 00:14:34.525 "name": "BaseBdev3", 00:14:34.525 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:34.525 "is_configured": true, 00:14:34.525 "data_offset": 2048, 00:14:34.525 "data_size": 63488 00:14:34.525 }, 00:14:34.525 { 00:14:34.525 "name": "BaseBdev4", 00:14:34.525 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:34.525 "is_configured": true, 00:14:34.525 "data_offset": 2048, 00:14:34.525 "data_size": 63488 00:14:34.525 } 00:14:34.525 ] 00:14:34.525 }' 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.525 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:34.784 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.784 08:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.784 [2024-12-13 08:25:46.938852] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.784 [2024-12-13 08:25:47.080483] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.784 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.784 "name": "raid_bdev1", 00:14:34.784 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:34.784 "strip_size_kb": 0, 00:14:34.784 "state": "online", 00:14:34.784 "raid_level": "raid1", 00:14:34.784 "superblock": true, 00:14:34.784 "num_base_bdevs": 4, 
00:14:34.784 "num_base_bdevs_discovered": 3, 00:14:34.784 "num_base_bdevs_operational": 3, 00:14:34.784 "process": { 00:14:34.784 "type": "rebuild", 00:14:34.784 "target": "spare", 00:14:34.784 "progress": { 00:14:34.784 "blocks": 24576, 00:14:34.784 "percent": 38 00:14:34.784 } 00:14:34.784 }, 00:14:34.784 "base_bdevs_list": [ 00:14:34.784 { 00:14:34.784 "name": "spare", 00:14:34.784 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:34.784 "is_configured": true, 00:14:34.784 "data_offset": 2048, 00:14:34.784 "data_size": 63488 00:14:34.784 }, 00:14:34.784 { 00:14:34.784 "name": null, 00:14:34.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.784 "is_configured": false, 00:14:34.784 "data_offset": 0, 00:14:34.784 "data_size": 63488 00:14:34.784 }, 00:14:34.784 { 00:14:34.784 "name": "BaseBdev3", 00:14:34.784 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:34.784 "is_configured": true, 00:14:34.784 "data_offset": 2048, 00:14:34.784 "data_size": 63488 00:14:34.784 }, 00:14:34.784 { 00:14:34.784 "name": "BaseBdev4", 00:14:34.784 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:34.785 "is_configured": true, 00:14:34.785 "data_offset": 2048, 00:14:34.785 "data_size": 63488 00:14:34.785 } 00:14:34.785 ] 00:14:34.785 }' 00:14:34.785 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.043 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.043 "name": "raid_bdev1", 00:14:35.043 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:35.043 "strip_size_kb": 0, 00:14:35.043 "state": "online", 00:14:35.043 "raid_level": "raid1", 00:14:35.043 "superblock": true, 00:14:35.043 "num_base_bdevs": 4, 00:14:35.043 "num_base_bdevs_discovered": 3, 00:14:35.043 "num_base_bdevs_operational": 3, 00:14:35.043 "process": { 00:14:35.043 "type": "rebuild", 00:14:35.043 "target": "spare", 00:14:35.043 "progress": { 00:14:35.043 "blocks": 26624, 00:14:35.043 "percent": 41 00:14:35.043 } 00:14:35.043 }, 00:14:35.043 "base_bdevs_list": [ 00:14:35.043 { 00:14:35.043 "name": "spare", 00:14:35.043 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:35.043 "is_configured": true, 00:14:35.043 "data_offset": 2048, 00:14:35.043 "data_size": 63488 00:14:35.043 }, 00:14:35.043 { 
00:14:35.043 "name": null, 00:14:35.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.044 "is_configured": false, 00:14:35.044 "data_offset": 0, 00:14:35.044 "data_size": 63488 00:14:35.044 }, 00:14:35.044 { 00:14:35.044 "name": "BaseBdev3", 00:14:35.044 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:35.044 "is_configured": true, 00:14:35.044 "data_offset": 2048, 00:14:35.044 "data_size": 63488 00:14:35.044 }, 00:14:35.044 { 00:14:35.044 "name": "BaseBdev4", 00:14:35.044 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:35.044 "is_configured": true, 00:14:35.044 "data_offset": 2048, 00:14:35.044 "data_size": 63488 00:14:35.044 } 00:14:35.044 ] 00:14:35.044 }' 00:14:35.044 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.044 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.044 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.044 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.044 08:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.424 "name": "raid_bdev1", 00:14:36.424 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:36.424 "strip_size_kb": 0, 00:14:36.424 "state": "online", 00:14:36.424 "raid_level": "raid1", 00:14:36.424 "superblock": true, 00:14:36.424 "num_base_bdevs": 4, 00:14:36.424 "num_base_bdevs_discovered": 3, 00:14:36.424 "num_base_bdevs_operational": 3, 00:14:36.424 "process": { 00:14:36.424 "type": "rebuild", 00:14:36.424 "target": "spare", 00:14:36.424 "progress": { 00:14:36.424 "blocks": 51200, 00:14:36.424 "percent": 80 00:14:36.424 } 00:14:36.424 }, 00:14:36.424 "base_bdevs_list": [ 00:14:36.424 { 00:14:36.424 "name": "spare", 00:14:36.424 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:36.424 "is_configured": true, 00:14:36.424 "data_offset": 2048, 00:14:36.424 "data_size": 63488 00:14:36.424 }, 00:14:36.424 { 00:14:36.424 "name": null, 00:14:36.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.424 "is_configured": false, 00:14:36.424 "data_offset": 0, 00:14:36.424 "data_size": 63488 00:14:36.424 }, 00:14:36.424 { 00:14:36.424 "name": "BaseBdev3", 00:14:36.424 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:36.424 "is_configured": true, 00:14:36.424 "data_offset": 2048, 00:14:36.424 "data_size": 63488 00:14:36.424 }, 00:14:36.424 { 00:14:36.424 "name": "BaseBdev4", 00:14:36.424 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:36.424 "is_configured": true, 00:14:36.424 "data_offset": 
2048, 00:14:36.424 "data_size": 63488 00:14:36.424 } 00:14:36.424 ] 00:14:36.424 }' 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.424 08:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.683 [2024-12-13 08:25:48.989411] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:36.683 [2024-12-13 08:25:48.989496] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:36.683 [2024-12-13 08:25:48.989636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.251 "name": "raid_bdev1", 00:14:37.251 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:37.251 "strip_size_kb": 0, 00:14:37.251 "state": "online", 00:14:37.251 "raid_level": "raid1", 00:14:37.251 "superblock": true, 00:14:37.251 "num_base_bdevs": 4, 00:14:37.251 "num_base_bdevs_discovered": 3, 00:14:37.251 "num_base_bdevs_operational": 3, 00:14:37.251 "base_bdevs_list": [ 00:14:37.251 { 00:14:37.251 "name": "spare", 00:14:37.251 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:37.251 "is_configured": true, 00:14:37.251 "data_offset": 2048, 00:14:37.251 "data_size": 63488 00:14:37.251 }, 00:14:37.251 { 00:14:37.251 "name": null, 00:14:37.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.251 "is_configured": false, 00:14:37.251 "data_offset": 0, 00:14:37.251 "data_size": 63488 00:14:37.251 }, 00:14:37.251 { 00:14:37.251 "name": "BaseBdev3", 00:14:37.251 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:37.251 "is_configured": true, 00:14:37.251 "data_offset": 2048, 00:14:37.251 "data_size": 63488 00:14:37.251 }, 00:14:37.251 { 00:14:37.251 "name": "BaseBdev4", 00:14:37.251 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:37.251 "is_configured": true, 00:14:37.251 "data_offset": 2048, 00:14:37.251 "data_size": 63488 00:14:37.251 } 00:14:37.251 ] 00:14:37.251 }' 00:14:37.251 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.511 "name": "raid_bdev1", 00:14:37.511 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:37.511 "strip_size_kb": 0, 00:14:37.511 "state": "online", 00:14:37.511 "raid_level": "raid1", 00:14:37.511 "superblock": true, 00:14:37.511 "num_base_bdevs": 4, 00:14:37.511 "num_base_bdevs_discovered": 3, 00:14:37.511 "num_base_bdevs_operational": 3, 00:14:37.511 "base_bdevs_list": [ 00:14:37.511 { 00:14:37.511 "name": "spare", 00:14:37.511 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 
00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": null, 00:14:37.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.511 "is_configured": false, 00:14:37.511 "data_offset": 0, 00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": "BaseBdev3", 00:14:37.511 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 }, 00:14:37.511 { 00:14:37.511 "name": "BaseBdev4", 00:14:37.511 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:37.511 "is_configured": true, 00:14:37.511 "data_offset": 2048, 00:14:37.511 "data_size": 63488 00:14:37.511 } 00:14:37.511 ] 00:14:37.511 }' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.511 
08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.511 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.771 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.771 "name": "raid_bdev1", 00:14:37.771 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:37.771 "strip_size_kb": 0, 00:14:37.771 "state": "online", 00:14:37.771 "raid_level": "raid1", 00:14:37.771 "superblock": true, 00:14:37.772 "num_base_bdevs": 4, 00:14:37.772 "num_base_bdevs_discovered": 3, 00:14:37.772 "num_base_bdevs_operational": 3, 00:14:37.772 "base_bdevs_list": [ 00:14:37.772 { 00:14:37.772 "name": "spare", 00:14:37.772 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": null, 00:14:37.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.772 "is_configured": false, 00:14:37.772 "data_offset": 0, 00:14:37.772 "data_size": 63488 00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": "BaseBdev3", 00:14:37.772 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 
00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": "BaseBdev4", 00:14:37.772 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 } 00:14:37.772 ] 00:14:37.772 }' 00:14:37.772 08:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.772 08:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.031 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.031 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 [2024-12-13 08:25:50.349523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.031 [2024-12-13 08:25:50.349629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.031 [2024-12-13 08:25:50.349765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.031 [2024-12-13 08:25:50.349887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.031 [2024-12-13 08:25:50.349938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:38.031 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.031 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.032 
08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.032 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:38.297 /dev/nbd0 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.297 1+0 records in 00:14:38.297 1+0 records out 00:14:38.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476309 s, 8.6 MB/s 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.297 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:38.563 /dev/nbd1 00:14:38.563 08:25:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.563 1+0 records in 00:14:38.563 1+0 records out 00:14:38.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378321 s, 10.8 MB/s 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:38.563 08:25:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.563 08:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.822 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.081 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.081 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.081 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.082 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.341 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.341 [2024-12-13 08:25:51.506543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:39.341 [2024-12-13 08:25:51.506601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.341 [2024-12-13 08:25:51.506623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:39.342 [2024-12-13 08:25:51.506632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.342 [2024-12-13 08:25:51.508877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.342 [2024-12-13 08:25:51.508916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.342 [2024-12-13 08:25:51.509015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.342 [2024-12-13 08:25:51.509073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.342 [2024-12-13 08:25:51.509234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.342 [2024-12-13 08:25:51.509321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.342 spare 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.342 [2024-12-13 08:25:51.609213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:39.342 [2024-12-13 08:25:51.609241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.342 [2024-12-13 08:25:51.609546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:39.342 [2024-12-13 08:25:51.609719] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:39.342 [2024-12-13 08:25:51.609731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:39.342 [2024-12-13 08:25:51.609904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.342 "name": "raid_bdev1", 00:14:39.342 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:39.342 "strip_size_kb": 0, 00:14:39.342 "state": "online", 00:14:39.342 "raid_level": "raid1", 00:14:39.342 "superblock": true, 00:14:39.342 "num_base_bdevs": 4, 00:14:39.342 "num_base_bdevs_discovered": 3, 00:14:39.342 "num_base_bdevs_operational": 3, 00:14:39.342 "base_bdevs_list": [ 00:14:39.342 { 00:14:39.342 "name": "spare", 00:14:39.342 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:39.342 "is_configured": true, 00:14:39.342 "data_offset": 2048, 00:14:39.342 "data_size": 63488 00:14:39.342 }, 00:14:39.342 { 00:14:39.342 "name": null, 00:14:39.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.342 "is_configured": false, 00:14:39.342 "data_offset": 2048, 00:14:39.342 "data_size": 63488 00:14:39.342 }, 00:14:39.342 { 00:14:39.342 "name": "BaseBdev3", 00:14:39.342 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:39.342 "is_configured": true, 00:14:39.342 "data_offset": 2048, 00:14:39.342 "data_size": 63488 00:14:39.342 }, 00:14:39.342 { 00:14:39.342 "name": "BaseBdev4", 00:14:39.342 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:39.342 "is_configured": true, 00:14:39.342 "data_offset": 2048, 00:14:39.342 "data_size": 63488 00:14:39.342 } 00:14:39.342 ] 00:14:39.342 }' 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.342 08:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.912 08:25:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.912 "name": "raid_bdev1", 00:14:39.912 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:39.912 "strip_size_kb": 0, 00:14:39.912 "state": "online", 00:14:39.912 "raid_level": "raid1", 00:14:39.912 "superblock": true, 00:14:39.912 "num_base_bdevs": 4, 00:14:39.912 "num_base_bdevs_discovered": 3, 00:14:39.912 "num_base_bdevs_operational": 3, 00:14:39.912 "base_bdevs_list": [ 00:14:39.912 { 00:14:39.912 "name": "spare", 00:14:39.912 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:39.912 "is_configured": true, 00:14:39.912 "data_offset": 2048, 00:14:39.912 "data_size": 63488 00:14:39.912 }, 00:14:39.912 { 00:14:39.912 "name": null, 00:14:39.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.912 "is_configured": false, 00:14:39.912 "data_offset": 2048, 00:14:39.912 "data_size": 63488 00:14:39.912 }, 00:14:39.912 { 00:14:39.912 "name": "BaseBdev3", 00:14:39.912 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:39.912 "is_configured": true, 00:14:39.912 "data_offset": 2048, 00:14:39.912 "data_size": 63488 00:14:39.912 
}, 00:14:39.912 { 00:14:39.912 "name": "BaseBdev4", 00:14:39.912 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:39.912 "is_configured": true, 00:14:39.912 "data_offset": 2048, 00:14:39.912 "data_size": 63488 00:14:39.912 } 00:14:39.912 ] 00:14:39.912 }' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.912 [2024-12-13 08:25:52.253367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.912 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.172 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.172 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.172 "name": "raid_bdev1", 00:14:40.172 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:40.172 "strip_size_kb": 0, 00:14:40.172 "state": "online", 00:14:40.172 "raid_level": "raid1", 00:14:40.172 "superblock": true, 00:14:40.172 "num_base_bdevs": 4, 00:14:40.172 "num_base_bdevs_discovered": 2, 00:14:40.172 "num_base_bdevs_operational": 
2, 00:14:40.172 "base_bdevs_list": [ 00:14:40.172 { 00:14:40.172 "name": null, 00:14:40.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.172 "is_configured": false, 00:14:40.172 "data_offset": 0, 00:14:40.172 "data_size": 63488 00:14:40.172 }, 00:14:40.172 { 00:14:40.172 "name": null, 00:14:40.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.172 "is_configured": false, 00:14:40.172 "data_offset": 2048, 00:14:40.172 "data_size": 63488 00:14:40.172 }, 00:14:40.172 { 00:14:40.172 "name": "BaseBdev3", 00:14:40.172 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:40.172 "is_configured": true, 00:14:40.172 "data_offset": 2048, 00:14:40.172 "data_size": 63488 00:14:40.172 }, 00:14:40.172 { 00:14:40.172 "name": "BaseBdev4", 00:14:40.172 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:40.172 "is_configured": true, 00:14:40.172 "data_offset": 2048, 00:14:40.172 "data_size": 63488 00:14:40.172 } 00:14:40.172 ] 00:14:40.172 }' 00:14:40.172 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.172 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.432 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.432 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.432 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.433 [2024-12-13 08:25:52.636729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.433 [2024-12-13 08:25:52.636941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.433 [2024-12-13 08:25:52.636956] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:40.433 [2024-12-13 08:25:52.637000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.433 [2024-12-13 08:25:52.651925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:40.433 08:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.433 08:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:40.433 [2024-12-13 08:25:52.653857] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.371 "name": "raid_bdev1", 00:14:41.371 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:41.371 "strip_size_kb": 0, 00:14:41.371 "state": "online", 00:14:41.371 "raid_level": "raid1", 
00:14:41.371 "superblock": true, 00:14:41.371 "num_base_bdevs": 4, 00:14:41.371 "num_base_bdevs_discovered": 3, 00:14:41.371 "num_base_bdevs_operational": 3, 00:14:41.371 "process": { 00:14:41.371 "type": "rebuild", 00:14:41.371 "target": "spare", 00:14:41.371 "progress": { 00:14:41.371 "blocks": 20480, 00:14:41.371 "percent": 32 00:14:41.371 } 00:14:41.371 }, 00:14:41.371 "base_bdevs_list": [ 00:14:41.371 { 00:14:41.371 "name": "spare", 00:14:41.371 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:41.371 "is_configured": true, 00:14:41.371 "data_offset": 2048, 00:14:41.371 "data_size": 63488 00:14:41.371 }, 00:14:41.371 { 00:14:41.371 "name": null, 00:14:41.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.371 "is_configured": false, 00:14:41.371 "data_offset": 2048, 00:14:41.371 "data_size": 63488 00:14:41.371 }, 00:14:41.371 { 00:14:41.371 "name": "BaseBdev3", 00:14:41.371 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:41.371 "is_configured": true, 00:14:41.371 "data_offset": 2048, 00:14:41.371 "data_size": 63488 00:14:41.371 }, 00:14:41.371 { 00:14:41.371 "name": "BaseBdev4", 00:14:41.371 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:41.371 "is_configured": true, 00:14:41.371 "data_offset": 2048, 00:14:41.371 "data_size": 63488 00:14:41.371 } 00:14:41.371 ] 00:14:41.371 }' 00:14:41.371 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.631 [2024-12-13 08:25:53.813051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.631 [2024-12-13 08:25:53.859285] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.631 [2024-12-13 08:25:53.859364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.631 [2024-12-13 08:25:53.859383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.631 [2024-12-13 08:25:53.859390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.631 "name": "raid_bdev1", 00:14:41.631 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:41.631 "strip_size_kb": 0, 00:14:41.631 "state": "online", 00:14:41.631 "raid_level": "raid1", 00:14:41.631 "superblock": true, 00:14:41.631 "num_base_bdevs": 4, 00:14:41.631 "num_base_bdevs_discovered": 2, 00:14:41.631 "num_base_bdevs_operational": 2, 00:14:41.631 "base_bdevs_list": [ 00:14:41.631 { 00:14:41.631 "name": null, 00:14:41.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.631 "is_configured": false, 00:14:41.631 "data_offset": 0, 00:14:41.631 "data_size": 63488 00:14:41.631 }, 00:14:41.631 { 00:14:41.631 "name": null, 00:14:41.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.631 "is_configured": false, 00:14:41.631 "data_offset": 2048, 00:14:41.631 "data_size": 63488 00:14:41.631 }, 00:14:41.631 { 00:14:41.631 "name": "BaseBdev3", 00:14:41.631 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:41.631 "is_configured": true, 00:14:41.631 "data_offset": 2048, 00:14:41.631 "data_size": 63488 00:14:41.631 }, 00:14:41.631 { 00:14:41.631 "name": "BaseBdev4", 00:14:41.631 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:41.631 "is_configured": true, 00:14:41.631 "data_offset": 2048, 00:14:41.631 "data_size": 63488 00:14:41.631 } 00:14:41.631 ] 00:14:41.631 }' 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:41.631 08:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.201 08:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.201 08:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.201 08:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.201 [2024-12-13 08:25:54.304727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.201 [2024-12-13 08:25:54.304883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.201 [2024-12-13 08:25:54.304923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:42.201 [2024-12-13 08:25:54.304935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.201 [2024-12-13 08:25:54.305487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.201 [2024-12-13 08:25:54.305515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.201 [2024-12-13 08:25:54.305620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.201 [2024-12-13 08:25:54.305640] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.201 [2024-12-13 08:25:54.305656] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:42.201 [2024-12-13 08:25:54.305683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.201 [2024-12-13 08:25:54.320983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:42.201 spare 00:14:42.201 08:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.201 08:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:42.201 [2024-12-13 08:25:54.323001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.140 "name": "raid_bdev1", 00:14:43.140 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:43.140 "strip_size_kb": 0, 00:14:43.140 "state": "online", 00:14:43.140 
"raid_level": "raid1", 00:14:43.140 "superblock": true, 00:14:43.140 "num_base_bdevs": 4, 00:14:43.140 "num_base_bdevs_discovered": 3, 00:14:43.140 "num_base_bdevs_operational": 3, 00:14:43.140 "process": { 00:14:43.140 "type": "rebuild", 00:14:43.140 "target": "spare", 00:14:43.140 "progress": { 00:14:43.140 "blocks": 20480, 00:14:43.140 "percent": 32 00:14:43.140 } 00:14:43.140 }, 00:14:43.140 "base_bdevs_list": [ 00:14:43.140 { 00:14:43.140 "name": "spare", 00:14:43.140 "uuid": "01802cb7-caa0-51c3-96bf-491b2a499331", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 2048, 00:14:43.140 "data_size": 63488 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": null, 00:14:43.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.140 "is_configured": false, 00:14:43.140 "data_offset": 2048, 00:14:43.140 "data_size": 63488 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": "BaseBdev3", 00:14:43.140 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 2048, 00:14:43.140 "data_size": 63488 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": "BaseBdev4", 00:14:43.140 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 2048, 00:14:43.140 "data_size": 63488 00:14:43.140 } 00:14:43.140 ] 00:14:43.140 }' 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.140 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.140 [2024-12-13 08:25:55.482506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.400 [2024-12-13 08:25:55.528684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.400 [2024-12-13 08:25:55.528816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.400 [2024-12-13 08:25:55.528854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.400 [2024-12-13 08:25:55.528877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.400 
08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.400 "name": "raid_bdev1", 00:14:43.400 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:43.400 "strip_size_kb": 0, 00:14:43.400 "state": "online", 00:14:43.400 "raid_level": "raid1", 00:14:43.400 "superblock": true, 00:14:43.400 "num_base_bdevs": 4, 00:14:43.400 "num_base_bdevs_discovered": 2, 00:14:43.400 "num_base_bdevs_operational": 2, 00:14:43.400 "base_bdevs_list": [ 00:14:43.400 { 00:14:43.400 "name": null, 00:14:43.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.400 "is_configured": false, 00:14:43.400 "data_offset": 0, 00:14:43.400 "data_size": 63488 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "name": null, 00:14:43.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.400 "is_configured": false, 00:14:43.400 "data_offset": 2048, 00:14:43.400 "data_size": 63488 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "name": "BaseBdev3", 00:14:43.400 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:43.400 "is_configured": true, 00:14:43.400 "data_offset": 2048, 00:14:43.400 "data_size": 63488 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "name": "BaseBdev4", 00:14:43.400 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:43.400 "is_configured": true, 00:14:43.400 "data_offset": 2048, 00:14:43.400 "data_size": 63488 00:14:43.400 } 00:14:43.400 ] 00:14:43.400 }' 00:14:43.400 08:25:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.400 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.660 08:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.920 "name": "raid_bdev1", 00:14:43.920 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:43.920 "strip_size_kb": 0, 00:14:43.920 "state": "online", 00:14:43.920 "raid_level": "raid1", 00:14:43.920 "superblock": true, 00:14:43.920 "num_base_bdevs": 4, 00:14:43.920 "num_base_bdevs_discovered": 2, 00:14:43.920 "num_base_bdevs_operational": 2, 00:14:43.920 "base_bdevs_list": [ 00:14:43.920 { 00:14:43.920 "name": null, 00:14:43.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.920 "is_configured": false, 00:14:43.920 "data_offset": 0, 00:14:43.920 "data_size": 63488 00:14:43.920 }, 00:14:43.920 
{ 00:14:43.920 "name": null, 00:14:43.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.920 "is_configured": false, 00:14:43.920 "data_offset": 2048, 00:14:43.920 "data_size": 63488 00:14:43.920 }, 00:14:43.920 { 00:14:43.920 "name": "BaseBdev3", 00:14:43.920 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:43.920 "is_configured": true, 00:14:43.920 "data_offset": 2048, 00:14:43.920 "data_size": 63488 00:14:43.920 }, 00:14:43.920 { 00:14:43.920 "name": "BaseBdev4", 00:14:43.920 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:43.920 "is_configured": true, 00:14:43.920 "data_offset": 2048, 00:14:43.920 "data_size": 63488 00:14:43.920 } 00:14:43.920 ] 00:14:43.920 }' 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.920 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.921 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.921 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.921 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.921 [2024-12-13 08:25:56.118148] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:43.921 [2024-12-13 08:25:56.118270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.921 [2024-12-13 08:25:56.118299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:43.921 [2024-12-13 08:25:56.118311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.921 [2024-12-13 08:25:56.118788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.921 [2024-12-13 08:25:56.118808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.921 [2024-12-13 08:25:56.118892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:43.921 [2024-12-13 08:25:56.118909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:43.921 [2024-12-13 08:25:56.118917] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:43.921 [2024-12-13 08:25:56.118940] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:43.921 BaseBdev1 00:14:43.921 08:25:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.921 08:25:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.860 08:25:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.860 "name": "raid_bdev1", 00:14:44.860 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:44.860 "strip_size_kb": 0, 00:14:44.860 "state": "online", 00:14:44.860 "raid_level": "raid1", 00:14:44.860 "superblock": true, 00:14:44.860 "num_base_bdevs": 4, 00:14:44.860 "num_base_bdevs_discovered": 2, 00:14:44.860 "num_base_bdevs_operational": 2, 00:14:44.860 "base_bdevs_list": [ 00:14:44.860 { 00:14:44.860 "name": null, 00:14:44.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.860 "is_configured": false, 00:14:44.860 "data_offset": 0, 00:14:44.860 "data_size": 63488 00:14:44.860 }, 00:14:44.860 { 00:14:44.860 "name": null, 00:14:44.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.860 
"is_configured": false, 00:14:44.860 "data_offset": 2048, 00:14:44.860 "data_size": 63488 00:14:44.860 }, 00:14:44.860 { 00:14:44.860 "name": "BaseBdev3", 00:14:44.860 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:44.860 "is_configured": true, 00:14:44.860 "data_offset": 2048, 00:14:44.860 "data_size": 63488 00:14:44.860 }, 00:14:44.860 { 00:14:44.860 "name": "BaseBdev4", 00:14:44.860 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:44.860 "is_configured": true, 00:14:44.860 "data_offset": 2048, 00:14:44.860 "data_size": 63488 00:14:44.860 } 00:14:44.860 ] 00:14:44.860 }' 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.860 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:45.432 "name": "raid_bdev1", 00:14:45.432 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:45.432 "strip_size_kb": 0, 00:14:45.432 "state": "online", 00:14:45.432 "raid_level": "raid1", 00:14:45.432 "superblock": true, 00:14:45.432 "num_base_bdevs": 4, 00:14:45.432 "num_base_bdevs_discovered": 2, 00:14:45.432 "num_base_bdevs_operational": 2, 00:14:45.432 "base_bdevs_list": [ 00:14:45.432 { 00:14:45.432 "name": null, 00:14:45.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.432 "is_configured": false, 00:14:45.432 "data_offset": 0, 00:14:45.432 "data_size": 63488 00:14:45.432 }, 00:14:45.432 { 00:14:45.432 "name": null, 00:14:45.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.432 "is_configured": false, 00:14:45.432 "data_offset": 2048, 00:14:45.432 "data_size": 63488 00:14:45.432 }, 00:14:45.432 { 00:14:45.432 "name": "BaseBdev3", 00:14:45.432 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:45.432 "is_configured": true, 00:14:45.432 "data_offset": 2048, 00:14:45.432 "data_size": 63488 00:14:45.432 }, 00:14:45.432 { 00:14:45.432 "name": "BaseBdev4", 00:14:45.432 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:45.432 "is_configured": true, 00:14:45.432 "data_offset": 2048, 00:14:45.432 "data_size": 63488 00:14:45.432 } 00:14:45.432 ] 00:14:45.432 }' 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 [2024-12-13 08:25:57.695450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.432 [2024-12-13 08:25:57.695656] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:45.432 [2024-12-13 08:25:57.695674] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:45.432 request: 00:14:45.432 { 00:14:45.432 "base_bdev": "BaseBdev1", 00:14:45.432 "raid_bdev": "raid_bdev1", 00:14:45.432 "method": "bdev_raid_add_base_bdev", 00:14:45.432 "req_id": 1 00:14:45.432 } 00:14:45.432 Got JSON-RPC error response 00:14:45.432 response: 00:14:45.432 { 00:14:45.432 "code": -22, 00:14:45.432 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:45.432 } 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.432 08:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.371 08:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:46.632 08:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.632 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.632 "name": "raid_bdev1", 00:14:46.632 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:46.632 "strip_size_kb": 0, 00:14:46.632 "state": "online", 00:14:46.632 "raid_level": "raid1", 00:14:46.632 "superblock": true, 00:14:46.632 "num_base_bdevs": 4, 00:14:46.632 "num_base_bdevs_discovered": 2, 00:14:46.632 "num_base_bdevs_operational": 2, 00:14:46.632 "base_bdevs_list": [ 00:14:46.632 { 00:14:46.632 "name": null, 00:14:46.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.632 "is_configured": false, 00:14:46.632 "data_offset": 0, 00:14:46.632 "data_size": 63488 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": null, 00:14:46.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.632 "is_configured": false, 00:14:46.632 "data_offset": 2048, 00:14:46.632 "data_size": 63488 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": "BaseBdev3", 00:14:46.632 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 2048, 00:14:46.632 "data_size": 63488 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": "BaseBdev4", 00:14:46.632 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 2048, 00:14:46.632 "data_size": 63488 00:14:46.632 } 00:14:46.632 ] 00:14:46.632 }' 00:14:46.632 08:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.632 08:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.891 08:25:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.891 "name": "raid_bdev1", 00:14:46.891 "uuid": "565ddc88-482a-47f9-a5f1-063521f9a5c9", 00:14:46.891 "strip_size_kb": 0, 00:14:46.891 "state": "online", 00:14:46.891 "raid_level": "raid1", 00:14:46.891 "superblock": true, 00:14:46.891 "num_base_bdevs": 4, 00:14:46.891 "num_base_bdevs_discovered": 2, 00:14:46.891 "num_base_bdevs_operational": 2, 00:14:46.891 "base_bdevs_list": [ 00:14:46.891 { 00:14:46.891 "name": null, 00:14:46.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.891 "is_configured": false, 00:14:46.891 "data_offset": 0, 00:14:46.891 "data_size": 63488 00:14:46.891 }, 00:14:46.891 { 00:14:46.891 "name": null, 00:14:46.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.891 "is_configured": false, 00:14:46.891 "data_offset": 2048, 00:14:46.891 "data_size": 63488 00:14:46.891 }, 00:14:46.891 { 00:14:46.891 "name": "BaseBdev3", 00:14:46.891 "uuid": "e75d5fff-02a1-526c-8b0e-b7f54e1b54fb", 00:14:46.891 "is_configured": true, 00:14:46.891 "data_offset": 2048, 00:14:46.891 "data_size": 63488 00:14:46.891 }, 
00:14:46.891 { 00:14:46.891 "name": "BaseBdev4", 00:14:46.891 "uuid": "9c555dcb-9e45-5479-9668-f411f61cfe23", 00:14:46.891 "is_configured": true, 00:14:46.891 "data_offset": 2048, 00:14:46.891 "data_size": 63488 00:14:46.891 } 00:14:46.891 ] 00:14:46.891 }' 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.891 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78158 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78158 ']' 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78158 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78158 00:14:47.152 killing process with pid 78158 00:14:47.152 Received shutdown signal, test time was about 60.000000 seconds 00:14:47.152 00:14:47.152 Latency(us) 00:14:47.152 [2024-12-13T08:25:59.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.152 [2024-12-13T08:25:59.517Z] =================================================================================================================== 00:14:47.152 [2024-12-13T08:25:59.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78158' 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78158 00:14:47.152 [2024-12-13 08:25:59.315844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.152 [2024-12-13 08:25:59.315972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.152 08:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78158 00:14:47.152 [2024-12-13 08:25:59.316050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.152 [2024-12-13 08:25:59.316062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:47.721 [2024-12-13 08:25:59.826308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.660 08:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:48.660 00:14:48.660 real 0m25.309s 00:14:48.660 user 0m30.588s 00:14:48.660 sys 0m3.754s 00:14:48.660 08:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.660 ************************************ 00:14:48.660 END TEST raid_rebuild_test_sb 00:14:48.660 ************************************ 00:14:48.660 08:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 08:26:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:48.660 08:26:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:48.660 08:26:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.660 08:26:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:48.660 ************************************ 00:14:48.660 START TEST raid_rebuild_test_io 00:14:48.660 ************************************ 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.660 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:48.919 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.919 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.919 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:48.919 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.919 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78919 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78919 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78919 ']' 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.920 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.920 [2024-12-13 08:26:01.115022] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:14:48.920 [2024-12-13 08:26:01.115264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78919 ] 00:14:48.920 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:48.920 Zero copy mechanism will not be used. 00:14:49.179 [2024-12-13 08:26:01.289636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.179 [2024-12-13 08:26:01.411191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.437 [2024-12-13 08:26:01.617464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.437 [2024-12-13 08:26:01.617592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.697 BaseBdev1_malloc 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.697 08:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.697 [2024-12-13 08:26:02.005823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.697 [2024-12-13 08:26:02.005897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.697 [2024-12-13 08:26:02.005923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:49.697 [2024-12-13 08:26:02.005937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.697 [2024-12-13 08:26:02.008348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.697 [2024-12-13 08:26:02.008457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.697 BaseBdev1 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:49.697 BaseBdev2_malloc 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.697 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 [2024-12-13 08:26:02.062504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:49.958 [2024-12-13 08:26:02.062574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.958 [2024-12-13 08:26:02.062595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:49.958 [2024-12-13 08:26:02.062608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.958 [2024-12-13 08:26:02.064947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.958 [2024-12-13 08:26:02.064989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.958 BaseBdev2 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 BaseBdev3_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 [2024-12-13 08:26:02.126051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:49.958 [2024-12-13 08:26:02.126225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.958 [2024-12-13 08:26:02.126261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:49.958 [2024-12-13 08:26:02.126275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.958 [2024-12-13 08:26:02.128718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.958 [2024-12-13 08:26:02.128762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:49.958 BaseBdev3 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 BaseBdev4_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 [2024-12-13 08:26:02.182082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:49.958 [2024-12-13 08:26:02.182245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.958 [2024-12-13 08:26:02.182274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:49.958 [2024-12-13 08:26:02.182286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.958 [2024-12-13 08:26:02.184540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.958 [2024-12-13 08:26:02.184586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:49.958 BaseBdev4 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 spare_malloc 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 spare_delay 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 [2024-12-13 08:26:02.250285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.958 [2024-12-13 08:26:02.250420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.958 [2024-12-13 08:26:02.250447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:49.958 [2024-12-13 08:26:02.250460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.958 [2024-12-13 08:26:02.252764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.958 [2024-12-13 08:26:02.252808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.958 spare 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 [2024-12-13 08:26:02.262307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.958 [2024-12-13 08:26:02.264107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.958 [2024-12-13 08:26:02.264187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.958 [2024-12-13 08:26:02.264241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:49.958 [2024-12-13 08:26:02.264340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.958 [2024-12-13 08:26:02.264355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:49.958 [2024-12-13 08:26:02.264617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:49.958 [2024-12-13 08:26:02.264800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.958 [2024-12-13 08:26:02.264812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.958 [2024-12-13 08:26:02.264972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.958 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.958 "name": "raid_bdev1", 00:14:49.958 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:49.958 "strip_size_kb": 0, 00:14:49.958 "state": "online", 00:14:49.958 "raid_level": "raid1", 00:14:49.958 "superblock": false, 00:14:49.958 "num_base_bdevs": 4, 00:14:49.958 "num_base_bdevs_discovered": 4, 00:14:49.958 "num_base_bdevs_operational": 4, 00:14:49.958 "base_bdevs_list": [ 00:14:49.958 { 00:14:49.958 "name": "BaseBdev1", 00:14:49.958 "uuid": "d5e62f13-0b57-5319-b71f-b192ea0b0fab", 00:14:49.958 "is_configured": true, 00:14:49.958 "data_offset": 0, 00:14:49.958 "data_size": 65536 00:14:49.958 }, 00:14:49.958 { 00:14:49.958 "name": "BaseBdev2", 00:14:49.958 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:49.958 "is_configured": true, 00:14:49.958 "data_offset": 0, 00:14:49.958 "data_size": 65536 00:14:49.958 }, 00:14:49.959 { 00:14:49.959 "name": "BaseBdev3", 00:14:49.959 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:49.959 "is_configured": true, 00:14:49.959 "data_offset": 0, 00:14:49.959 "data_size": 65536 00:14:49.959 }, 00:14:49.959 { 00:14:49.959 "name": "BaseBdev4", 00:14:49.959 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:49.959 "is_configured": true, 00:14:49.959 "data_offset": 0, 00:14:49.959 "data_size": 65536 00:14:49.959 } 00:14:49.959 ] 00:14:49.959 }' 00:14:49.959 
08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.959 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:50.526 [2024-12-13 08:26:02.721881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:50.526 08:26:02 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.526 [2024-12-13 08:26:02.821357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.526 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.527 "name": "raid_bdev1", 00:14:50.527 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:50.527 "strip_size_kb": 0, 00:14:50.527 "state": "online", 00:14:50.527 "raid_level": "raid1", 00:14:50.527 "superblock": false, 00:14:50.527 "num_base_bdevs": 4, 00:14:50.527 "num_base_bdevs_discovered": 3, 00:14:50.527 "num_base_bdevs_operational": 3, 00:14:50.527 "base_bdevs_list": [ 00:14:50.527 { 00:14:50.527 "name": null, 00:14:50.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.527 "is_configured": false, 00:14:50.527 "data_offset": 0, 00:14:50.527 "data_size": 65536 00:14:50.527 }, 00:14:50.527 { 00:14:50.527 "name": "BaseBdev2", 00:14:50.527 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:50.527 "is_configured": true, 00:14:50.527 "data_offset": 0, 00:14:50.527 "data_size": 65536 00:14:50.527 }, 00:14:50.527 { 00:14:50.527 "name": "BaseBdev3", 00:14:50.527 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:50.527 "is_configured": true, 00:14:50.527 "data_offset": 0, 00:14:50.527 "data_size": 65536 00:14:50.527 }, 00:14:50.527 { 00:14:50.527 "name": "BaseBdev4", 00:14:50.527 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:50.527 "is_configured": true, 00:14:50.527 "data_offset": 0, 00:14:50.527 "data_size": 65536 00:14:50.527 } 00:14:50.527 ] 00:14:50.527 }' 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.527 08:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.786 [2024-12-13 08:26:02.909006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:50.786 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.786 Zero copy mechanism will not be used. 00:14:50.786 Running I/O for 60 seconds... 
00:14:51.045 08:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.046 08:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.046 08:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.046 [2024-12-13 08:26:03.255716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.046 08:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.046 08:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:51.046 [2024-12-13 08:26:03.329266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:51.046 [2024-12-13 08:26:03.331442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.305 [2024-12-13 08:26:03.447725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:51.305 [2024-12-13 08:26:03.449299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:51.305 [2024-12-13 08:26:03.657090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.305 [2024-12-13 08:26:03.657997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.823 161.00 IOPS, 483.00 MiB/s [2024-12-13T08:26:04.188Z] [2024-12-13 08:26:04.035962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:51.823 [2024-12-13 08:26:04.146529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.823 [2024-12-13 08:26:04.146989] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.083 "name": "raid_bdev1", 00:14:52.083 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:52.083 "strip_size_kb": 0, 00:14:52.083 "state": "online", 00:14:52.083 "raid_level": "raid1", 00:14:52.083 "superblock": false, 00:14:52.083 "num_base_bdevs": 4, 00:14:52.083 "num_base_bdevs_discovered": 4, 00:14:52.083 "num_base_bdevs_operational": 4, 00:14:52.083 "process": { 00:14:52.083 "type": "rebuild", 00:14:52.083 "target": "spare", 00:14:52.083 "progress": { 00:14:52.083 "blocks": 12288, 00:14:52.083 "percent": 18 00:14:52.083 } 00:14:52.083 }, 00:14:52.083 "base_bdevs_list": [ 00:14:52.083 { 00:14:52.083 "name": "spare", 00:14:52.083 "uuid": 
"61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:52.083 "is_configured": true, 00:14:52.083 "data_offset": 0, 00:14:52.083 "data_size": 65536 00:14:52.083 }, 00:14:52.083 { 00:14:52.083 "name": "BaseBdev2", 00:14:52.083 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:52.083 "is_configured": true, 00:14:52.083 "data_offset": 0, 00:14:52.083 "data_size": 65536 00:14:52.083 }, 00:14:52.083 { 00:14:52.083 "name": "BaseBdev3", 00:14:52.083 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:52.083 "is_configured": true, 00:14:52.083 "data_offset": 0, 00:14:52.083 "data_size": 65536 00:14:52.083 }, 00:14:52.083 { 00:14:52.083 "name": "BaseBdev4", 00:14:52.083 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:52.083 "is_configured": true, 00:14:52.083 "data_offset": 0, 00:14:52.083 "data_size": 65536 00:14:52.083 } 00:14:52.083 ] 00:14:52.083 }' 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.083 [2024-12-13 08:26:04.382152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.083 [2024-12-13 08:26:04.383576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.083 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.343 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.343 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:52.343 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.343 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.343 [2024-12-13 
08:26:04.471178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.343 [2024-12-13 08:26:04.492564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.343 [2024-12-13 08:26:04.594867] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:52.344 [2024-12-13 08:26:04.606118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.344 [2024-12-13 08:26:04.606171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.344 [2024-12-13 08:26:04.606189] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:52.344 [2024-12-13 08:26:04.642469] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.344 "name": "raid_bdev1", 00:14:52.344 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:52.344 "strip_size_kb": 0, 00:14:52.344 "state": "online", 00:14:52.344 "raid_level": "raid1", 00:14:52.344 "superblock": false, 00:14:52.344 "num_base_bdevs": 4, 00:14:52.344 "num_base_bdevs_discovered": 3, 00:14:52.344 "num_base_bdevs_operational": 3, 00:14:52.344 "base_bdevs_list": [ 00:14:52.344 { 00:14:52.344 "name": null, 00:14:52.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.344 "is_configured": false, 00:14:52.344 "data_offset": 0, 00:14:52.344 "data_size": 65536 00:14:52.344 }, 00:14:52.344 { 00:14:52.344 "name": "BaseBdev2", 00:14:52.344 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:52.344 "is_configured": true, 00:14:52.344 "data_offset": 0, 00:14:52.344 "data_size": 65536 00:14:52.344 }, 00:14:52.344 { 00:14:52.344 "name": "BaseBdev3", 00:14:52.344 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:52.344 "is_configured": true, 00:14:52.344 "data_offset": 0, 00:14:52.344 "data_size": 65536 00:14:52.344 }, 00:14:52.344 { 00:14:52.344 "name": "BaseBdev4", 00:14:52.344 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:52.344 "is_configured": true, 00:14:52.344 
"data_offset": 0, 00:14:52.344 "data_size": 65536 00:14:52.344 } 00:14:52.344 ] 00:14:52.344 }' 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.344 08:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.862 145.50 IOPS, 436.50 MiB/s [2024-12-13T08:26:05.227Z] 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.862 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.863 "name": "raid_bdev1", 00:14:52.863 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:52.863 "strip_size_kb": 0, 00:14:52.863 "state": "online", 00:14:52.863 "raid_level": "raid1", 00:14:52.863 "superblock": false, 00:14:52.863 "num_base_bdevs": 4, 00:14:52.863 "num_base_bdevs_discovered": 3, 00:14:52.863 "num_base_bdevs_operational": 3, 00:14:52.863 "base_bdevs_list": [ 00:14:52.863 { 00:14:52.863 "name": null, 00:14:52.863 
"uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.863 "is_configured": false, 00:14:52.863 "data_offset": 0, 00:14:52.863 "data_size": 65536 00:14:52.863 }, 00:14:52.863 { 00:14:52.863 "name": "BaseBdev2", 00:14:52.863 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:52.863 "is_configured": true, 00:14:52.863 "data_offset": 0, 00:14:52.863 "data_size": 65536 00:14:52.863 }, 00:14:52.863 { 00:14:52.863 "name": "BaseBdev3", 00:14:52.863 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:52.863 "is_configured": true, 00:14:52.863 "data_offset": 0, 00:14:52.863 "data_size": 65536 00:14:52.863 }, 00:14:52.863 { 00:14:52.863 "name": "BaseBdev4", 00:14:52.863 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:52.863 "is_configured": true, 00:14:52.863 "data_offset": 0, 00:14:52.863 "data_size": 65536 00:14:52.863 } 00:14:52.863 ] 00:14:52.863 }' 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.863 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.863 [2024-12-13 08:26:05.201196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.122 08:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.122 08:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:53.122 [2024-12-13 
08:26:05.248808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:53.122 [2024-12-13 08:26:05.250791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.122 [2024-12-13 08:26:05.379840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.122 [2024-12-13 08:26:05.380565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.381 [2024-12-13 08:26:05.599763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.381 [2024-12-13 08:26:05.600256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.641 [2024-12-13 08:26:05.844121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.900 143.67 IOPS, 431.00 MiB/s [2024-12-13T08:26:06.265Z] [2024-12-13 08:26:06.063060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.900 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.159 "name": "raid_bdev1", 00:14:54.159 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:54.159 "strip_size_kb": 0, 00:14:54.159 "state": "online", 00:14:54.159 "raid_level": "raid1", 00:14:54.159 "superblock": false, 00:14:54.159 "num_base_bdevs": 4, 00:14:54.159 "num_base_bdevs_discovered": 4, 00:14:54.159 "num_base_bdevs_operational": 4, 00:14:54.159 "process": { 00:14:54.159 "type": "rebuild", 00:14:54.159 "target": "spare", 00:14:54.159 "progress": { 00:14:54.159 "blocks": 10240, 00:14:54.159 "percent": 15 00:14:54.159 } 00:14:54.159 }, 00:14:54.159 "base_bdevs_list": [ 00:14:54.159 { 00:14:54.159 "name": "spare", 00:14:54.159 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:54.159 "is_configured": true, 00:14:54.159 "data_offset": 0, 00:14:54.159 "data_size": 65536 00:14:54.159 }, 00:14:54.159 { 00:14:54.159 "name": "BaseBdev2", 00:14:54.159 "uuid": "e2efb95c-24e1-56c0-9306-9dce10c62880", 00:14:54.159 "is_configured": true, 00:14:54.159 "data_offset": 0, 00:14:54.159 "data_size": 65536 00:14:54.159 }, 00:14:54.159 { 00:14:54.159 "name": "BaseBdev3", 00:14:54.159 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:54.159 "is_configured": true, 00:14:54.159 "data_offset": 0, 00:14:54.159 "data_size": 65536 00:14:54.159 }, 00:14:54.159 { 00:14:54.159 "name": "BaseBdev4", 00:14:54.159 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:54.159 "is_configured": true, 00:14:54.159 "data_offset": 0, 00:14:54.159 "data_size": 65536 00:14:54.159 } 00:14:54.159 ] 00:14:54.159 }' 
00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.159 [2024-12-13 08:26:06.391439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:54.159 [2024-12-13 08:26:06.392101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.159 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.159 [2024-12-13 08:26:06.402971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.417 [2024-12-13 08:26:06.632631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:54.417 [2024-12-13 08:26:06.633086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:54.417 
[2024-12-13 08:26:06.736224] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:54.417 [2024-12-13 08:26:06.736354] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:54.418 [2024-12-13 08:26:06.738081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.418 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.676 "name": "raid_bdev1", 
00:14:54.676 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:54.676 "strip_size_kb": 0, 00:14:54.676 "state": "online", 00:14:54.676 "raid_level": "raid1", 00:14:54.676 "superblock": false, 00:14:54.676 "num_base_bdevs": 4, 00:14:54.676 "num_base_bdevs_discovered": 3, 00:14:54.676 "num_base_bdevs_operational": 3, 00:14:54.676 "process": { 00:14:54.676 "type": "rebuild", 00:14:54.676 "target": "spare", 00:14:54.676 "progress": { 00:14:54.676 "blocks": 16384, 00:14:54.676 "percent": 25 00:14:54.676 } 00:14:54.676 }, 00:14:54.676 "base_bdevs_list": [ 00:14:54.676 { 00:14:54.676 "name": "spare", 00:14:54.676 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:54.676 "is_configured": true, 00:14:54.676 "data_offset": 0, 00:14:54.676 "data_size": 65536 00:14:54.676 }, 00:14:54.676 { 00:14:54.676 "name": null, 00:14:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.676 "is_configured": false, 00:14:54.676 "data_offset": 0, 00:14:54.676 "data_size": 65536 00:14:54.676 }, 00:14:54.676 { 00:14:54.676 "name": "BaseBdev3", 00:14:54.676 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:54.676 "is_configured": true, 00:14:54.676 "data_offset": 0, 00:14:54.676 "data_size": 65536 00:14:54.676 }, 00:14:54.676 { 00:14:54.676 "name": "BaseBdev4", 00:14:54.676 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:54.676 "is_configured": true, 00:14:54.676 "data_offset": 0, 00:14:54.676 "data_size": 65536 00:14:54.676 } 00:14:54.676 ] 00:14:54.676 }' 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=487 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.676 121.75 IOPS, 365.25 MiB/s [2024-12-13T08:26:07.041Z] 08:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.676 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.676 "name": "raid_bdev1", 00:14:54.677 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:54.677 "strip_size_kb": 0, 00:14:54.677 "state": "online", 00:14:54.677 "raid_level": "raid1", 00:14:54.677 "superblock": false, 00:14:54.677 "num_base_bdevs": 4, 00:14:54.677 "num_base_bdevs_discovered": 3, 00:14:54.677 "num_base_bdevs_operational": 3, 00:14:54.677 "process": { 00:14:54.677 "type": "rebuild", 00:14:54.677 "target": "spare", 00:14:54.677 "progress": { 00:14:54.677 "blocks": 18432, 00:14:54.677 "percent": 28 00:14:54.677 } 00:14:54.677 }, 00:14:54.677 
"base_bdevs_list": [ 00:14:54.677 { 00:14:54.677 "name": "spare", 00:14:54.677 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:54.677 "is_configured": true, 00:14:54.677 "data_offset": 0, 00:14:54.677 "data_size": 65536 00:14:54.677 }, 00:14:54.677 { 00:14:54.677 "name": null, 00:14:54.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.677 "is_configured": false, 00:14:54.677 "data_offset": 0, 00:14:54.677 "data_size": 65536 00:14:54.677 }, 00:14:54.677 { 00:14:54.677 "name": "BaseBdev3", 00:14:54.677 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:54.677 "is_configured": true, 00:14:54.677 "data_offset": 0, 00:14:54.677 "data_size": 65536 00:14:54.677 }, 00:14:54.677 { 00:14:54.677 "name": "BaseBdev4", 00:14:54.677 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:54.677 "is_configured": true, 00:14:54.677 "data_offset": 0, 00:14:54.677 "data_size": 65536 00:14:54.677 } 00:14:54.677 ] 00:14:54.677 }' 00:14:54.677 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.677 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.677 08:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.677 08:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.677 08:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.245 [2024-12-13 08:26:07.371468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:55.245 [2024-12-13 08:26:07.479190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:55.814 112.00 IOPS, 336.00 MiB/s [2024-12-13T08:26:08.179Z] [2024-12-13 08:26:07.929592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.814 "name": "raid_bdev1", 00:14:55.814 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:55.814 "strip_size_kb": 0, 00:14:55.814 "state": "online", 00:14:55.814 "raid_level": "raid1", 00:14:55.814 "superblock": false, 00:14:55.814 "num_base_bdevs": 4, 00:14:55.814 "num_base_bdevs_discovered": 3, 00:14:55.814 "num_base_bdevs_operational": 3, 00:14:55.814 "process": { 00:14:55.814 "type": "rebuild", 00:14:55.814 "target": "spare", 00:14:55.814 "progress": { 00:14:55.814 "blocks": 34816, 00:14:55.814 "percent": 53 00:14:55.814 } 00:14:55.814 }, 00:14:55.814 "base_bdevs_list": [ 00:14:55.814 { 00:14:55.814 "name": "spare", 
00:14:55.814 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:55.814 "is_configured": true, 00:14:55.814 "data_offset": 0, 00:14:55.814 "data_size": 65536 00:14:55.814 }, 00:14:55.814 { 00:14:55.814 "name": null, 00:14:55.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.814 "is_configured": false, 00:14:55.814 "data_offset": 0, 00:14:55.814 "data_size": 65536 00:14:55.814 }, 00:14:55.814 { 00:14:55.814 "name": "BaseBdev3", 00:14:55.814 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:55.814 "is_configured": true, 00:14:55.814 "data_offset": 0, 00:14:55.814 "data_size": 65536 00:14:55.814 }, 00:14:55.814 { 00:14:55.814 "name": "BaseBdev4", 00:14:55.814 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:55.814 "is_configured": true, 00:14:55.814 "data_offset": 0, 00:14:55.814 "data_size": 65536 00:14:55.814 } 00:14:55.814 ] 00:14:55.814 }' 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.814 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.074 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.074 08:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.074 [2024-12-13 08:26:08.255392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:56.074 [2024-12-13 08:26:08.371354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:56.334 [2024-12-13 08:26:08.600710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:56.853 100.33 IOPS, 301.00 MiB/s [2024-12-13T08:26:09.218Z] 08:26:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.853 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.853 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.853 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.853 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.854 08:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.113 "name": "raid_bdev1", 00:14:57.113 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:57.113 "strip_size_kb": 0, 00:14:57.113 "state": "online", 00:14:57.113 "raid_level": "raid1", 00:14:57.113 "superblock": false, 00:14:57.113 "num_base_bdevs": 4, 00:14:57.113 "num_base_bdevs_discovered": 3, 00:14:57.113 "num_base_bdevs_operational": 3, 00:14:57.113 "process": { 00:14:57.113 "type": "rebuild", 00:14:57.113 "target": "spare", 00:14:57.113 "progress": { 00:14:57.113 "blocks": 55296, 00:14:57.113 "percent": 84 00:14:57.113 } 00:14:57.113 }, 00:14:57.113 "base_bdevs_list": [ 00:14:57.113 { 00:14:57.113 "name": "spare", 00:14:57.113 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 
00:14:57.113 "is_configured": true, 00:14:57.113 "data_offset": 0, 00:14:57.113 "data_size": 65536 00:14:57.113 }, 00:14:57.113 { 00:14:57.113 "name": null, 00:14:57.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.113 "is_configured": false, 00:14:57.113 "data_offset": 0, 00:14:57.113 "data_size": 65536 00:14:57.113 }, 00:14:57.113 { 00:14:57.113 "name": "BaseBdev3", 00:14:57.113 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:57.113 "is_configured": true, 00:14:57.113 "data_offset": 0, 00:14:57.113 "data_size": 65536 00:14:57.113 }, 00:14:57.113 { 00:14:57.113 "name": "BaseBdev4", 00:14:57.113 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:57.113 "is_configured": true, 00:14:57.113 "data_offset": 0, 00:14:57.113 "data_size": 65536 00:14:57.113 } 00:14:57.113 ] 00:14:57.113 }' 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.113 08:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.373 [2024-12-13 08:26:09.735798] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:57.633 [2024-12-13 08:26:09.835619] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:57.633 [2024-12-13 08:26:09.838628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.202 92.00 IOPS, 276.00 MiB/s [2024-12-13T08:26:10.567Z] 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.202 "name": "raid_bdev1", 00:14:58.202 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:58.202 "strip_size_kb": 0, 00:14:58.202 "state": "online", 00:14:58.202 "raid_level": "raid1", 00:14:58.202 "superblock": false, 00:14:58.202 "num_base_bdevs": 4, 00:14:58.202 "num_base_bdevs_discovered": 3, 00:14:58.202 "num_base_bdevs_operational": 3, 00:14:58.202 "base_bdevs_list": [ 00:14:58.202 { 00:14:58.202 "name": "spare", 00:14:58.202 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": null, 00:14:58.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.202 "is_configured": false, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": 
"BaseBdev3", 00:14:58.202 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": "BaseBdev4", 00:14:58.202 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 } 00:14:58.202 ] 00:14:58.202 }' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.202 
08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.202 "name": "raid_bdev1", 00:14:58.202 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:58.202 "strip_size_kb": 0, 00:14:58.202 "state": "online", 00:14:58.202 "raid_level": "raid1", 00:14:58.202 "superblock": false, 00:14:58.202 "num_base_bdevs": 4, 00:14:58.202 "num_base_bdevs_discovered": 3, 00:14:58.202 "num_base_bdevs_operational": 3, 00:14:58.202 "base_bdevs_list": [ 00:14:58.202 { 00:14:58.202 "name": "spare", 00:14:58.202 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": null, 00:14:58.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.202 "is_configured": false, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": "BaseBdev3", 00:14:58.202 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 }, 00:14:58.202 { 00:14:58.202 "name": "BaseBdev4", 00:14:58.202 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:58.202 "is_configured": true, 00:14:58.202 "data_offset": 0, 00:14:58.202 "data_size": 65536 00:14:58.202 } 00:14:58.202 ] 00:14:58.202 }' 00:14:58.202 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.468 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.468 "name": "raid_bdev1", 00:14:58.468 "uuid": "ca97aae3-a6b4-44bc-a9a5-8dcd4ec44b70", 00:14:58.468 "strip_size_kb": 0, 00:14:58.468 "state": "online", 00:14:58.468 "raid_level": "raid1", 00:14:58.468 "superblock": false, 00:14:58.468 "num_base_bdevs": 4, 00:14:58.468 
"num_base_bdevs_discovered": 3, 00:14:58.468 "num_base_bdevs_operational": 3, 00:14:58.468 "base_bdevs_list": [ 00:14:58.468 { 00:14:58.468 "name": "spare", 00:14:58.468 "uuid": "61b646b4-4ad4-5322-887e-3e3c5d284a6c", 00:14:58.468 "is_configured": true, 00:14:58.468 "data_offset": 0, 00:14:58.468 "data_size": 65536 00:14:58.468 }, 00:14:58.468 { 00:14:58.468 "name": null, 00:14:58.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.469 "is_configured": false, 00:14:58.469 "data_offset": 0, 00:14:58.469 "data_size": 65536 00:14:58.469 }, 00:14:58.469 { 00:14:58.469 "name": "BaseBdev3", 00:14:58.469 "uuid": "bf709167-f98f-5bd9-b141-2c234553a45a", 00:14:58.469 "is_configured": true, 00:14:58.469 "data_offset": 0, 00:14:58.469 "data_size": 65536 00:14:58.469 }, 00:14:58.469 { 00:14:58.469 "name": "BaseBdev4", 00:14:58.469 "uuid": "23758935-1d3c-5e2d-a36e-88b07c249c73", 00:14:58.469 "is_configured": true, 00:14:58.469 "data_offset": 0, 00:14:58.469 "data_size": 65536 00:14:58.469 } 00:14:58.469 ] 00:14:58.469 }' 00:14:58.469 08:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.469 08:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.737 84.25 IOPS, 252.75 MiB/s [2024-12-13T08:26:11.102Z] 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.738 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.738 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.738 [2024-12-13 08:26:11.088494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.738 [2024-12-13 08:26:11.088575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.997 00:14:58.997 Latency(us) 00:14:58.997 [2024-12-13T08:26:11.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:58.997 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:58.997 raid_bdev1 : 8.26 82.84 248.52 0.00 0.00 16872.84 348.79 119052.30 00:14:58.997 [2024-12-13T08:26:11.362Z] =================================================================================================================== 00:14:58.997 [2024-12-13T08:26:11.362Z] Total : 82.84 248.52 0.00 0.00 16872.84 348.79 119052.30 00:14:58.997 [2024-12-13 08:26:11.175770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.997 [2024-12-13 08:26:11.175847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.997 [2024-12-13 08:26:11.175954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.997 [2024-12-13 08:26:11.175967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.997 { 00:14:58.997 "results": [ 00:14:58.997 { 00:14:58.997 "job": "raid_bdev1", 00:14:58.997 "core_mask": "0x1", 00:14:58.997 "workload": "randrw", 00:14:58.997 "percentage": 50, 00:14:58.997 "status": "finished", 00:14:58.997 "queue_depth": 2, 00:14:58.997 "io_size": 3145728, 00:14:58.997 "runtime": 8.256864, 00:14:58.997 "iops": 82.84016788940693, 00:14:58.997 "mibps": 248.52050366822078, 00:14:58.997 "io_failed": 0, 00:14:58.998 "io_timeout": 0, 00:14:58.998 "avg_latency_us": 16872.84304502158, 00:14:58.998 "min_latency_us": 348.7860262008734, 00:14:58.998 "max_latency_us": 119052.29694323144 00:14:58.998 } 00:14:58.998 ], 00:14:58.998 "core_count": 1 00:14:58.998 } 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.998 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:59.258 /dev/nbd0 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.258 1+0 records in 00:14:59.258 1+0 records out 00:14:59.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286285 s, 14.3 MB/s 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.258 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:59.518 /dev/nbd1 00:14:59.518 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.518 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 
00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.519 1+0 records in 00:14:59.519 1+0 records out 00:14:59.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425806 s, 9.6 MB/s 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.519 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:59.778 
08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.778 08:26:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.038 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.039 08:26:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:00.039 /dev/nbd1 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.039 08:26:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.299 1+0 records in 00:15:00.299 1+0 records out 00:15:00.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472982 s, 8.7 MB/s 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.299 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.559 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78919 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78919 ']' 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78919 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78919 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78919' 00:15:00.819 killing process with pid 78919 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78919 00:15:00.819 Received shutdown signal, test time was about 10.102844 seconds 00:15:00.819 00:15:00.819 Latency(us) 00:15:00.819 [2024-12-13T08:26:13.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.819 [2024-12-13T08:26:13.184Z] 
=================================================================================================================== 00:15:00.819 [2024-12-13T08:26:13.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.819 08:26:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78919 00:15:00.819 [2024-12-13 08:26:12.994810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.079 [2024-12-13 08:26:13.423382] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:02.460 00:15:02.460 real 0m13.579s 00:15:02.460 user 0m17.047s 00:15:02.460 sys 0m1.899s 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.460 ************************************ 00:15:02.460 END TEST raid_rebuild_test_io 00:15:02.460 ************************************ 00:15:02.460 08:26:14 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:02.460 08:26:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:02.460 08:26:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.460 08:26:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.460 ************************************ 00:15:02.460 START TEST raid_rebuild_test_sb_io 00:15:02.460 ************************************ 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.460 08:26:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79331 00:15:02.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79331 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79331 ']' 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.460 08:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.460 [2024-12-13 08:26:14.772909] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:15:02.460 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:02.460 Zero copy mechanism will not be used. 00:15:02.460 [2024-12-13 08:26:14.773163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79331 ] 00:15:02.720 [2024-12-13 08:26:14.934563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.720 [2024-12-13 08:26:15.055362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.979 [2024-12-13 08:26:15.253448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.979 [2024-12-13 08:26:15.253527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 BaseBdev1_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 [2024-12-13 08:26:15.651728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.550 [2024-12-13 08:26:15.651879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.550 [2024-12-13 08:26:15.651909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.550 [2024-12-13 08:26:15.651923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.550 [2024-12-13 08:26:15.654129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.550 [2024-12-13 08:26:15.654168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.550 BaseBdev1 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 BaseBdev2_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 [2024-12-13 08:26:15.707728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:03.550 [2024-12-13 08:26:15.707793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.550 [2024-12-13 08:26:15.707813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.550 [2024-12-13 08:26:15.707824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.550 [2024-12-13 08:26:15.709930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.550 [2024-12-13 08:26:15.710009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.550 BaseBdev2 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 BaseBdev3_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 [2024-12-13 08:26:15.775034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:03.550 [2024-12-13 08:26:15.775089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.550 [2024-12-13 08:26:15.775123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:03.550 [2024-12-13 08:26:15.775134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.550 [2024-12-13 08:26:15.777210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.550 [2024-12-13 08:26:15.777246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:03.550 BaseBdev3 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 BaseBdev4_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 [2024-12-13 08:26:15.828916] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:03.550 [2024-12-13 08:26:15.828977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.550 [2024-12-13 08:26:15.828997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:03.550 [2024-12-13 08:26:15.829006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.550 [2024-12-13 08:26:15.831086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.550 [2024-12-13 08:26:15.831147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:03.550 BaseBdev4 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 spare_malloc 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 spare_delay 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.550 [2024-12-13 08:26:15.893936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.550 [2024-12-13 08:26:15.894041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.550 [2024-12-13 08:26:15.894061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:03.550 [2024-12-13 08:26:15.894072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.550 [2024-12-13 08:26:15.896249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.550 [2024-12-13 08:26:15.896290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.550 spare 00:15:03.550 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.551 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:03.551 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.551 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.551 [2024-12-13 08:26:15.905971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.551 [2024-12-13 08:26:15.907756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.551 [2024-12-13 08:26:15.907818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.551 [2024-12-13 08:26:15.907867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:03.551 [2024-12-13 08:26:15.908051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:03.551 [2024-12-13 08:26:15.908064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:03.551 [2024-12-13 08:26:15.908324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:03.551 [2024-12-13 08:26:15.908495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.551 [2024-12-13 08:26:15.908512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.551 [2024-12-13 08:26:15.908662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.811 "name": "raid_bdev1", 00:15:03.811 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:03.811 "strip_size_kb": 0, 00:15:03.811 "state": "online", 00:15:03.811 "raid_level": "raid1", 00:15:03.811 "superblock": true, 00:15:03.811 "num_base_bdevs": 4, 00:15:03.811 "num_base_bdevs_discovered": 4, 00:15:03.811 "num_base_bdevs_operational": 4, 00:15:03.811 "base_bdevs_list": [ 00:15:03.811 { 00:15:03.811 "name": "BaseBdev1", 00:15:03.811 "uuid": "775ac0c8-3ec0-5aab-b6fd-38ada2493be0", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": "BaseBdev2", 00:15:03.811 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": "BaseBdev3", 00:15:03.811 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": "BaseBdev4", 00:15:03.811 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 } 00:15:03.811 ] 00:15:03.811 }' 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:03.811 08:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 [2024-12-13 08:26:16.337596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:04.070 08:26:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 [2024-12-13 08:26:16.425084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.070 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.071 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.071 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.071 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.071 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.071 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.330 "name": "raid_bdev1", 00:15:04.330 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:04.330 "strip_size_kb": 0, 00:15:04.330 "state": "online", 00:15:04.330 "raid_level": "raid1", 00:15:04.330 "superblock": true, 00:15:04.330 "num_base_bdevs": 4, 00:15:04.330 "num_base_bdevs_discovered": 3, 00:15:04.330 "num_base_bdevs_operational": 3, 00:15:04.330 "base_bdevs_list": [ 00:15:04.330 { 00:15:04.330 "name": null, 00:15:04.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.330 "is_configured": false, 00:15:04.330 "data_offset": 0, 00:15:04.330 "data_size": 63488 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "name": "BaseBdev2", 00:15:04.330 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 2048, 00:15:04.330 "data_size": 63488 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "name": "BaseBdev3", 00:15:04.330 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 2048, 00:15:04.330 "data_size": 63488 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "name": "BaseBdev4", 00:15:04.330 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 2048, 00:15:04.330 "data_size": 63488 00:15:04.330 } 00:15:04.330 ] 00:15:04.330 }' 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.330 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 [2024-12-13 08:26:16.528598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:04.330 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:04.330 Zero copy mechanism will not be used. 
00:15:04.330 Running I/O for 60 seconds... 00:15:04.589 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.589 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.589 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.589 [2024-12-13 08:26:16.890956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.589 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.589 08:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.589 [2024-12-13 08:26:16.947761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:04.589 [2024-12-13 08:26:16.949785] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.848 [2024-12-13 08:26:17.058918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:04.848 [2024-12-13 08:26:17.059601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:04.848 [2024-12-13 08:26:17.174896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:04.848 [2024-12-13 08:26:17.175786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:05.463 [2024-12-13 08:26:17.493487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:05.463 179.00 IOPS, 537.00 MiB/s [2024-12-13T08:26:17.828Z] [2024-12-13 08:26:17.604508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:05.463 
[2024-12-13 08:26:17.604965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.722 [2024-12-13 08:26:17.945967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.722 "name": "raid_bdev1", 00:15:05.722 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:05.722 "strip_size_kb": 0, 00:15:05.722 "state": "online", 00:15:05.722 "raid_level": "raid1", 00:15:05.722 "superblock": true, 00:15:05.722 "num_base_bdevs": 4, 00:15:05.722 "num_base_bdevs_discovered": 4, 00:15:05.722 "num_base_bdevs_operational": 4, 00:15:05.722 "process": { 00:15:05.722 "type": "rebuild", 00:15:05.722 "target": 
"spare", 00:15:05.722 "progress": { 00:15:05.722 "blocks": 14336, 00:15:05.722 "percent": 22 00:15:05.722 } 00:15:05.722 }, 00:15:05.722 "base_bdevs_list": [ 00:15:05.722 { 00:15:05.722 "name": "spare", 00:15:05.722 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:05.722 "is_configured": true, 00:15:05.722 "data_offset": 2048, 00:15:05.722 "data_size": 63488 00:15:05.722 }, 00:15:05.722 { 00:15:05.722 "name": "BaseBdev2", 00:15:05.722 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:05.722 "is_configured": true, 00:15:05.722 "data_offset": 2048, 00:15:05.722 "data_size": 63488 00:15:05.722 }, 00:15:05.722 { 00:15:05.722 "name": "BaseBdev3", 00:15:05.722 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:05.722 "is_configured": true, 00:15:05.722 "data_offset": 2048, 00:15:05.722 "data_size": 63488 00:15:05.722 }, 00:15:05.722 { 00:15:05.722 "name": "BaseBdev4", 00:15:05.722 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:05.722 "is_configured": true, 00:15:05.722 "data_offset": 2048, 00:15:05.722 "data_size": 63488 00:15:05.722 } 00:15:05.722 ] 00:15:05.722 }' 00:15:05.722 08:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.722 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.722 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.981 [2024-12-13 08:26:18.102212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:15:05.981 [2024-12-13 08:26:18.172192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.981 [2024-12-13 08:26:18.191761] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.981 [2024-12-13 08:26:18.202451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.981 [2024-12-13 08:26:18.202506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.981 [2024-12-13 08:26:18.202520] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.981 [2024-12-13 08:26:18.231098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.981 08:26:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.981 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.981 "name": "raid_bdev1", 00:15:05.981 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:05.981 "strip_size_kb": 0, 00:15:05.981 "state": "online", 00:15:05.981 "raid_level": "raid1", 00:15:05.981 "superblock": true, 00:15:05.981 "num_base_bdevs": 4, 00:15:05.982 "num_base_bdevs_discovered": 3, 00:15:05.982 "num_base_bdevs_operational": 3, 00:15:05.982 "base_bdevs_list": [ 00:15:05.982 { 00:15:05.982 "name": null, 00:15:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.982 "is_configured": false, 00:15:05.982 "data_offset": 0, 00:15:05.982 "data_size": 63488 00:15:05.982 }, 00:15:05.982 { 00:15:05.982 "name": "BaseBdev2", 00:15:05.982 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:05.982 "is_configured": true, 00:15:05.982 "data_offset": 2048, 00:15:05.982 "data_size": 63488 00:15:05.982 }, 00:15:05.982 { 00:15:05.982 "name": "BaseBdev3", 00:15:05.982 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:05.982 "is_configured": true, 00:15:05.982 "data_offset": 2048, 00:15:05.982 "data_size": 63488 00:15:05.982 }, 00:15:05.982 { 00:15:05.982 "name": "BaseBdev4", 00:15:05.982 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:05.982 "is_configured": true, 00:15:05.982 "data_offset": 2048, 00:15:05.982 
"data_size": 63488 00:15:05.982 } 00:15:05.982 ] 00:15:05.982 }' 00:15:05.982 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.982 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.500 149.50 IOPS, 448.50 MiB/s [2024-12-13T08:26:18.865Z] 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.500 "name": "raid_bdev1", 00:15:06.500 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:06.500 "strip_size_kb": 0, 00:15:06.500 "state": "online", 00:15:06.500 "raid_level": "raid1", 00:15:06.500 "superblock": true, 00:15:06.500 "num_base_bdevs": 4, 00:15:06.500 "num_base_bdevs_discovered": 3, 00:15:06.500 "num_base_bdevs_operational": 3, 00:15:06.500 "base_bdevs_list": [ 00:15:06.500 { 00:15:06.500 "name": null, 
00:15:06.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.500 "is_configured": false, 00:15:06.500 "data_offset": 0, 00:15:06.500 "data_size": 63488 00:15:06.500 }, 00:15:06.500 { 00:15:06.500 "name": "BaseBdev2", 00:15:06.500 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:06.500 "is_configured": true, 00:15:06.500 "data_offset": 2048, 00:15:06.500 "data_size": 63488 00:15:06.500 }, 00:15:06.500 { 00:15:06.500 "name": "BaseBdev3", 00:15:06.500 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:06.500 "is_configured": true, 00:15:06.500 "data_offset": 2048, 00:15:06.500 "data_size": 63488 00:15:06.500 }, 00:15:06.500 { 00:15:06.500 "name": "BaseBdev4", 00:15:06.500 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:06.500 "is_configured": true, 00:15:06.500 "data_offset": 2048, 00:15:06.500 "data_size": 63488 00:15:06.500 } 00:15:06.500 ] 00:15:06.500 }' 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.500 [2024-12-13 08:26:18.799003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.500 08:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 
-- # sleep 1 00:15:06.500 [2024-12-13 08:26:18.841115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:06.500 [2024-12-13 08:26:18.843090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.759 [2024-12-13 08:26:18.959882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.759 [2024-12-13 08:26:18.960513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.759 [2024-12-13 08:26:19.079118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.759 [2024-12-13 08:26:19.079926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:07.326 [2024-12-13 08:26:19.407891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:07.326 [2024-12-13 08:26:19.409406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:07.326 152.00 IOPS, 456.00 MiB/s [2024-12-13T08:26:19.691Z] [2024-12-13 08:26:19.632897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.587 "name": "raid_bdev1", 00:15:07.587 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:07.587 "strip_size_kb": 0, 00:15:07.587 "state": "online", 00:15:07.587 "raid_level": "raid1", 00:15:07.587 "superblock": true, 00:15:07.587 "num_base_bdevs": 4, 00:15:07.587 "num_base_bdevs_discovered": 4, 00:15:07.587 "num_base_bdevs_operational": 4, 00:15:07.587 "process": { 00:15:07.587 "type": "rebuild", 00:15:07.587 "target": "spare", 00:15:07.587 "progress": { 00:15:07.587 "blocks": 12288, 00:15:07.587 "percent": 19 00:15:07.587 } 00:15:07.587 }, 00:15:07.587 "base_bdevs_list": [ 00:15:07.587 { 00:15:07.587 "name": "spare", 00:15:07.587 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:07.587 "is_configured": true, 00:15:07.587 "data_offset": 2048, 00:15:07.587 "data_size": 63488 00:15:07.587 }, 00:15:07.587 { 00:15:07.587 "name": "BaseBdev2", 00:15:07.587 "uuid": "808f722b-65ce-5f17-9a28-ac9485d9587b", 00:15:07.587 "is_configured": true, 00:15:07.587 "data_offset": 2048, 00:15:07.587 "data_size": 63488 00:15:07.587 }, 00:15:07.587 { 00:15:07.587 "name": "BaseBdev3", 00:15:07.587 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:07.587 "is_configured": true, 00:15:07.587 "data_offset": 2048, 00:15:07.587 "data_size": 63488 00:15:07.587 }, 
00:15:07.587 { 00:15:07.587 "name": "BaseBdev4", 00:15:07.587 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:07.587 "is_configured": true, 00:15:07.587 "data_offset": 2048, 00:15:07.587 "data_size": 63488 00:15:07.587 } 00:15:07.587 ] 00:15:07.587 }' 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.587 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.847 [2024-12-13 08:26:19.966834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:07.847 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.847 08:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.847 [2024-12-13 08:26:19.998087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.847 
[2024-12-13 08:26:20.202327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:08.107 [2024-12-13 08:26:20.315934] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:08.107 [2024-12-13 08:26:20.315983] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:08.107 [2024-12-13 08:26:20.316050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.107 08:26:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.107 "name": "raid_bdev1", 00:15:08.107 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:08.107 "strip_size_kb": 0, 00:15:08.107 "state": "online", 00:15:08.107 "raid_level": "raid1", 00:15:08.107 "superblock": true, 00:15:08.107 "num_base_bdevs": 4, 00:15:08.107 "num_base_bdevs_discovered": 3, 00:15:08.107 "num_base_bdevs_operational": 3, 00:15:08.107 "process": { 00:15:08.107 "type": "rebuild", 00:15:08.107 "target": "spare", 00:15:08.107 "progress": { 00:15:08.107 "blocks": 16384, 00:15:08.107 "percent": 25 00:15:08.107 } 00:15:08.107 }, 00:15:08.107 "base_bdevs_list": [ 00:15:08.107 { 00:15:08.107 "name": "spare", 00:15:08.107 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:08.107 "is_configured": true, 00:15:08.107 "data_offset": 2048, 00:15:08.107 "data_size": 63488 00:15:08.107 }, 00:15:08.107 { 00:15:08.107 "name": null, 00:15:08.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.107 "is_configured": false, 00:15:08.107 "data_offset": 0, 00:15:08.107 "data_size": 63488 00:15:08.107 }, 00:15:08.107 { 00:15:08.107 "name": "BaseBdev3", 00:15:08.107 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:08.107 "is_configured": true, 00:15:08.107 "data_offset": 2048, 00:15:08.107 "data_size": 63488 00:15:08.107 }, 00:15:08.107 { 00:15:08.107 "name": "BaseBdev4", 00:15:08.107 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:08.107 "is_configured": true, 00:15:08.107 "data_offset": 2048, 00:15:08.107 "data_size": 63488 00:15:08.107 } 00:15:08.107 ] 00:15:08.107 }' 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.107 08:26:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.107 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.367 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.367 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.367 "name": "raid_bdev1", 00:15:08.368 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:08.368 "strip_size_kb": 0, 00:15:08.368 "state": "online", 00:15:08.368 "raid_level": "raid1", 00:15:08.368 "superblock": true, 00:15:08.368 "num_base_bdevs": 4, 00:15:08.368 "num_base_bdevs_discovered": 3, 00:15:08.368 
"num_base_bdevs_operational": 3, 00:15:08.368 "process": { 00:15:08.368 "type": "rebuild", 00:15:08.368 "target": "spare", 00:15:08.368 "progress": { 00:15:08.368 "blocks": 16384, 00:15:08.368 "percent": 25 00:15:08.368 } 00:15:08.368 }, 00:15:08.368 "base_bdevs_list": [ 00:15:08.368 { 00:15:08.368 "name": "spare", 00:15:08.368 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:08.368 "is_configured": true, 00:15:08.368 "data_offset": 2048, 00:15:08.368 "data_size": 63488 00:15:08.368 }, 00:15:08.368 { 00:15:08.368 "name": null, 00:15:08.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.368 "is_configured": false, 00:15:08.368 "data_offset": 0, 00:15:08.368 "data_size": 63488 00:15:08.368 }, 00:15:08.368 { 00:15:08.368 "name": "BaseBdev3", 00:15:08.368 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:08.368 "is_configured": true, 00:15:08.368 "data_offset": 2048, 00:15:08.368 "data_size": 63488 00:15:08.368 }, 00:15:08.368 { 00:15:08.368 "name": "BaseBdev4", 00:15:08.368 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:08.368 "is_configured": true, 00:15:08.368 "data_offset": 2048, 00:15:08.368 "data_size": 63488 00:15:08.368 } 00:15:08.368 ] 00:15:08.368 }' 00:15:08.368 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.368 126.50 IOPS, 379.50 MiB/s [2024-12-13T08:26:20.733Z] 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.368 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.368 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.368 08:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.368 [2024-12-13 08:26:20.637060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:09.306 113.80 
IOPS, 341.40 MiB/s [2024-12-13T08:26:21.671Z] [2024-12-13 08:26:21.568568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:09.306 [2024-12-13 08:26:21.581875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.306 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.306 "name": "raid_bdev1", 00:15:09.306 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:09.307 "strip_size_kb": 0, 00:15:09.307 "state": "online", 00:15:09.307 "raid_level": "raid1", 00:15:09.307 "superblock": true, 00:15:09.307 "num_base_bdevs": 4, 00:15:09.307 
"num_base_bdevs_discovered": 3, 00:15:09.307 "num_base_bdevs_operational": 3, 00:15:09.307 "process": { 00:15:09.307 "type": "rebuild", 00:15:09.307 "target": "spare", 00:15:09.307 "progress": { 00:15:09.307 "blocks": 34816, 00:15:09.307 "percent": 54 00:15:09.307 } 00:15:09.307 }, 00:15:09.307 "base_bdevs_list": [ 00:15:09.307 { 00:15:09.307 "name": "spare", 00:15:09.307 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:09.307 "is_configured": true, 00:15:09.307 "data_offset": 2048, 00:15:09.307 "data_size": 63488 00:15:09.307 }, 00:15:09.307 { 00:15:09.307 "name": null, 00:15:09.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.307 "is_configured": false, 00:15:09.307 "data_offset": 0, 00:15:09.307 "data_size": 63488 00:15:09.307 }, 00:15:09.307 { 00:15:09.307 "name": "BaseBdev3", 00:15:09.307 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:09.307 "is_configured": true, 00:15:09.307 "data_offset": 2048, 00:15:09.307 "data_size": 63488 00:15:09.307 }, 00:15:09.307 { 00:15:09.307 "name": "BaseBdev4", 00:15:09.307 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:09.307 "is_configured": true, 00:15:09.307 "data_offset": 2048, 00:15:09.307 "data_size": 63488 00:15:09.307 } 00:15:09.307 ] 00:15:09.307 }' 00:15:09.566 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.566 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.566 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.566 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.566 08:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.566 [2024-12-13 08:26:21.924176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:09.566 [2024-12-13 
08:26:21.925108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:09.825 [2024-12-13 08:26:22.127824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:10.085 [2024-12-13 08:26:22.344889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:10.344 102.50 IOPS, 307.50 MiB/s [2024-12-13T08:26:22.709Z] [2024-12-13 08:26:22.559880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:10.603 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.603 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.603 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.603 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.604 "name": "raid_bdev1", 00:15:10.604 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:10.604 "strip_size_kb": 0, 00:15:10.604 "state": "online", 00:15:10.604 "raid_level": "raid1", 00:15:10.604 "superblock": true, 00:15:10.604 "num_base_bdevs": 4, 00:15:10.604 "num_base_bdevs_discovered": 3, 00:15:10.604 "num_base_bdevs_operational": 3, 00:15:10.604 "process": { 00:15:10.604 "type": "rebuild", 00:15:10.604 "target": "spare", 00:15:10.604 "progress": { 00:15:10.604 "blocks": 47104, 00:15:10.604 "percent": 74 00:15:10.604 } 00:15:10.604 }, 00:15:10.604 "base_bdevs_list": [ 00:15:10.604 { 00:15:10.604 "name": "spare", 00:15:10.604 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:10.604 "is_configured": true, 00:15:10.604 "data_offset": 2048, 00:15:10.604 "data_size": 63488 00:15:10.604 }, 00:15:10.604 { 00:15:10.604 "name": null, 00:15:10.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.604 "is_configured": false, 00:15:10.604 "data_offset": 0, 00:15:10.604 "data_size": 63488 00:15:10.604 }, 00:15:10.604 { 00:15:10.604 "name": "BaseBdev3", 00:15:10.604 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:10.604 "is_configured": true, 00:15:10.604 "data_offset": 2048, 00:15:10.604 "data_size": 63488 00:15:10.604 }, 00:15:10.604 { 00:15:10.604 "name": "BaseBdev4", 00:15:10.604 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:10.604 "is_configured": true, 00:15:10.604 "data_offset": 2048, 00:15:10.604 "data_size": 63488 00:15:10.604 } 00:15:10.604 ] 00:15:10.604 }' 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.604 08:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.863 [2024-12-13 08:26:23.003024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:11.125 [2024-12-13 08:26:23.333067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:11.387 93.57 IOPS, 280.71 MiB/s [2024-12-13T08:26:23.752Z] [2024-12-13 08:26:23.658139] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.646 [2024-12-13 08:26:23.757988] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.646 [2024-12-13 08:26:23.760379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.646 08:26:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.646 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.646 "name": "raid_bdev1", 00:15:11.646 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:11.646 "strip_size_kb": 0, 00:15:11.646 "state": "online", 00:15:11.646 "raid_level": "raid1", 00:15:11.646 "superblock": true, 00:15:11.646 "num_base_bdevs": 4, 00:15:11.646 "num_base_bdevs_discovered": 3, 00:15:11.646 "num_base_bdevs_operational": 3, 00:15:11.646 "base_bdevs_list": [ 00:15:11.646 { 00:15:11.646 "name": "spare", 00:15:11.646 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:11.646 "is_configured": true, 00:15:11.646 "data_offset": 2048, 00:15:11.646 "data_size": 63488 00:15:11.646 }, 00:15:11.646 { 00:15:11.646 "name": null, 00:15:11.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.646 "is_configured": false, 00:15:11.646 "data_offset": 0, 00:15:11.646 "data_size": 63488 00:15:11.646 }, 00:15:11.646 { 00:15:11.646 "name": "BaseBdev3", 00:15:11.647 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:11.647 "is_configured": true, 00:15:11.647 "data_offset": 2048, 00:15:11.647 "data_size": 63488 00:15:11.647 }, 00:15:11.647 { 00:15:11.647 "name": "BaseBdev4", 00:15:11.647 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:11.647 "is_configured": true, 00:15:11.647 "data_offset": 2048, 00:15:11.647 "data_size": 63488 00:15:11.647 } 00:15:11.647 ] 00:15:11.647 }' 00:15:11.647 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.647 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.647 08:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.907 08:26:24 
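The `jq -r '.process.type // "none"'` and `'.process.target // "none"'` extractions traced above lean on jq's alternative operator: once the rebuild finishes and `.process` disappears from the bdev JSON, the path evaluates to null and `//` substitutes `"none"` — which is what flips the trace from `[[ rebuild == \r\e\b\u\i\l\d ]]` to `[[ none == \r\e\b\u\i\l\d ]]` and lets the loop break. A sketch on canned JSON (the field shape mirrors the dumps in this log; no live `rpc_cmd`):

```shell
#!/usr/bin/env bash
# While the rebuild is in progress, .process carries type/target:
running='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
ptype=$(jq -r '.process.type // "none"' <<<"$running")
ptarget=$(jq -r '.process.target // "none"' <<<"$running")

# After completion the process object is gone; // supplies the default:
finished='{"name":"raid_bdev1","state":"online"}'
pdone=$(jq -r '.process.type // "none"' <<<"$finished")

echo "$ptype $ptarget $pdone"   # rebuild spare none
```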
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.907 "name": "raid_bdev1", 00:15:11.907 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:11.907 "strip_size_kb": 0, 00:15:11.907 "state": "online", 00:15:11.907 "raid_level": "raid1", 00:15:11.907 "superblock": true, 00:15:11.907 "num_base_bdevs": 4, 00:15:11.907 "num_base_bdevs_discovered": 3, 00:15:11.907 "num_base_bdevs_operational": 3, 00:15:11.907 "base_bdevs_list": [ 00:15:11.907 { 00:15:11.907 "name": "spare", 00:15:11.907 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:11.907 "is_configured": true, 00:15:11.907 "data_offset": 2048, 00:15:11.907 
"data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": null, 00:15:11.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.907 "is_configured": false, 00:15:11.907 "data_offset": 0, 00:15:11.907 "data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": "BaseBdev3", 00:15:11.907 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:11.907 "is_configured": true, 00:15:11.907 "data_offset": 2048, 00:15:11.907 "data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": "BaseBdev4", 00:15:11.907 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:11.907 "is_configured": true, 00:15:11.907 "data_offset": 2048, 00:15:11.907 "data_size": 63488 00:15:11.907 } 00:15:11.907 ] 00:15:11.907 }' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.907 "name": "raid_bdev1", 00:15:11.907 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:11.907 "strip_size_kb": 0, 00:15:11.907 "state": "online", 00:15:11.907 "raid_level": "raid1", 00:15:11.907 "superblock": true, 00:15:11.907 "num_base_bdevs": 4, 00:15:11.907 "num_base_bdevs_discovered": 3, 00:15:11.907 "num_base_bdevs_operational": 3, 00:15:11.907 "base_bdevs_list": [ 00:15:11.907 { 00:15:11.907 "name": "spare", 00:15:11.907 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:11.907 "is_configured": true, 00:15:11.907 "data_offset": 2048, 00:15:11.907 "data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": null, 00:15:11.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.907 "is_configured": false, 00:15:11.907 "data_offset": 0, 00:15:11.907 "data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": "BaseBdev3", 00:15:11.907 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:11.907 "is_configured": true, 00:15:11.907 
"data_offset": 2048, 00:15:11.907 "data_size": 63488 00:15:11.907 }, 00:15:11.907 { 00:15:11.907 "name": "BaseBdev4", 00:15:11.907 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:11.907 "is_configured": true, 00:15:11.907 "data_offset": 2048, 00:15:11.907 "data_size": 63488 00:15:11.907 } 00:15:11.907 ] 00:15:11.907 }' 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.907 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.426 86.38 IOPS, 259.12 MiB/s [2024-12-13T08:26:24.791Z] 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.426 [2024-12-13 08:26:24.644357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.426 [2024-12-13 08:26:24.644458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.426 00:15:12.426 Latency(us) 00:15:12.426 [2024-12-13T08:26:24.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.426 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:12.426 raid_bdev1 : 8.23 84.94 254.82 0.00 0.00 16658.76 296.92 118136.51 00:15:12.426 [2024-12-13T08:26:24.791Z] =================================================================================================================== 00:15:12.426 [2024-12-13T08:26:24.791Z] Total : 84.94 254.82 0.00 0.00 16658.76 296.92 118136.51 00:15:12.426 [2024-12-13 08:26:24.766894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.426 [2024-12-13 08:26:24.767040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.426 [2024-12-13 
08:26:24.767202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.426 [2024-12-13 08:26:24.767269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:12.426 { 00:15:12.426 "results": [ 00:15:12.426 { 00:15:12.426 "job": "raid_bdev1", 00:15:12.426 "core_mask": "0x1", 00:15:12.426 "workload": "randrw", 00:15:12.426 "percentage": 50, 00:15:12.426 "status": "finished", 00:15:12.426 "queue_depth": 2, 00:15:12.426 "io_size": 3145728, 00:15:12.426 "runtime": 8.229212, 00:15:12.426 "iops": 84.94130422208104, 00:15:12.426 "mibps": 254.82391266624313, 00:15:12.426 "io_failed": 0, 00:15:12.426 "io_timeout": 0, 00:15:12.426 "avg_latency_us": 16658.76191440049, 00:15:12.426 "min_latency_us": 296.91528384279474, 00:15:12.426 "max_latency_us": 118136.51004366812 00:15:12.426 } 00:15:12.426 ], 00:15:12.426 "core_count": 1 00:15:12.426 } 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.426 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.686 08:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:12.686 /dev/nbd0 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:12.947 08:26:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.947 1+0 records in 00:15:12.947 1+0 records out 00:15:12.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503282 s, 8.1 MB/s 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 
/dev/nbd1 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.947 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:12.947 /dev/nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.206 1+0 records in 00:15:13.206 1+0 records out 00:15:13.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354382 s, 11.6 MB/s 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.206 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.206 08:26:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.207 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:13.465 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.466 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:13.725 /dev/nbd1 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.725 08:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.725 1+0 records in 00:15:13.725 1+0 records out 00:15:13.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354107 s, 11.6 MB/s 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.725 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.984 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.243 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.243 
08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.244 [2024-12-13 08:26:26.568755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.244 [2024-12-13 08:26:26.568804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.244 [2024-12-13 08:26:26.568825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:14.244 [2024-12-13 08:26:26.568834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.244 [2024-12-13 08:26:26.571002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.244 [2024-12-13 08:26:26.571107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.244 [2024-12-13 08:26:26.571214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.244 [2024-12-13 08:26:26.571293] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.244 [2024-12-13 08:26:26.571456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.244 [2024-12-13 08:26:26.571562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:14.244 spare 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.244 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.503 [2024-12-13 08:26:26.671461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:14.503 [2024-12-13 08:26:26.671491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.503 [2024-12-13 08:26:26.671799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:14.503 [2024-12-13 08:26:26.671990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:14.503 [2024-12-13 08:26:26.672004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:14.503 [2024-12-13 08:26:26.672205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.503 "name": "raid_bdev1", 00:15:14.503 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:14.503 "strip_size_kb": 0, 00:15:14.503 "state": "online", 00:15:14.503 "raid_level": "raid1", 00:15:14.503 "superblock": true, 00:15:14.503 "num_base_bdevs": 4, 00:15:14.503 "num_base_bdevs_discovered": 3, 00:15:14.503 "num_base_bdevs_operational": 3, 00:15:14.503 "base_bdevs_list": [ 00:15:14.503 { 00:15:14.503 "name": "spare", 00:15:14.503 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:14.503 "is_configured": true, 
00:15:14.503 "data_offset": 2048, 00:15:14.503 "data_size": 63488 00:15:14.503 }, 00:15:14.503 { 00:15:14.503 "name": null, 00:15:14.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.503 "is_configured": false, 00:15:14.503 "data_offset": 2048, 00:15:14.503 "data_size": 63488 00:15:14.503 }, 00:15:14.503 { 00:15:14.503 "name": "BaseBdev3", 00:15:14.503 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:14.503 "is_configured": true, 00:15:14.503 "data_offset": 2048, 00:15:14.503 "data_size": 63488 00:15:14.503 }, 00:15:14.503 { 00:15:14.503 "name": "BaseBdev4", 00:15:14.503 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:14.503 "is_configured": true, 00:15:14.503 "data_offset": 2048, 00:15:14.503 "data_size": 63488 00:15:14.503 } 00:15:14.503 ] 00:15:14.503 }' 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.503 08:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.762 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.022 "name": "raid_bdev1", 00:15:15.022 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:15.022 "strip_size_kb": 0, 00:15:15.022 "state": "online", 00:15:15.022 "raid_level": "raid1", 00:15:15.022 "superblock": true, 00:15:15.022 "num_base_bdevs": 4, 00:15:15.022 "num_base_bdevs_discovered": 3, 00:15:15.022 "num_base_bdevs_operational": 3, 00:15:15.022 "base_bdevs_list": [ 00:15:15.022 { 00:15:15.022 "name": "spare", 00:15:15.022 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:15.022 "is_configured": true, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": null, 00:15:15.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.022 "is_configured": false, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": "BaseBdev3", 00:15:15.022 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:15.022 "is_configured": true, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": "BaseBdev4", 00:15:15.022 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:15.022 "is_configured": true, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 } 00:15:15.022 ] 00:15:15.022 }' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.022 [2024-12-13 08:26:27.279717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.022 08:26:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.022 "name": "raid_bdev1", 00:15:15.022 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:15.022 "strip_size_kb": 0, 00:15:15.022 "state": "online", 00:15:15.022 "raid_level": "raid1", 00:15:15.022 "superblock": true, 00:15:15.022 "num_base_bdevs": 4, 00:15:15.022 "num_base_bdevs_discovered": 2, 00:15:15.022 "num_base_bdevs_operational": 2, 00:15:15.022 "base_bdevs_list": [ 00:15:15.022 { 00:15:15.022 "name": null, 00:15:15.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.022 "is_configured": false, 00:15:15.022 "data_offset": 0, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": null, 00:15:15.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.022 "is_configured": false, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": "BaseBdev3", 00:15:15.022 "uuid": 
"d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:15.022 "is_configured": true, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 }, 00:15:15.022 { 00:15:15.022 "name": "BaseBdev4", 00:15:15.022 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:15.022 "is_configured": true, 00:15:15.022 "data_offset": 2048, 00:15:15.022 "data_size": 63488 00:15:15.022 } 00:15:15.022 ] 00:15:15.022 }' 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.022 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.590 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.590 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.590 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.590 [2024-12-13 08:26:27.707120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.590 [2024-12-13 08:26:27.707408] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:15.591 [2024-12-13 08:26:27.707430] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:15.591 [2024-12-13 08:26:27.707471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.591 [2024-12-13 08:26:27.723513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:15.591 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.591 08:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:15.591 [2024-12-13 08:26:27.725444] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.527 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.527 "name": "raid_bdev1", 00:15:16.527 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:16.527 "strip_size_kb": 0, 00:15:16.527 "state": "online", 
00:15:16.527 "raid_level": "raid1", 00:15:16.527 "superblock": true, 00:15:16.527 "num_base_bdevs": 4, 00:15:16.527 "num_base_bdevs_discovered": 3, 00:15:16.527 "num_base_bdevs_operational": 3, 00:15:16.527 "process": { 00:15:16.527 "type": "rebuild", 00:15:16.527 "target": "spare", 00:15:16.527 "progress": { 00:15:16.527 "blocks": 20480, 00:15:16.527 "percent": 32 00:15:16.527 } 00:15:16.527 }, 00:15:16.527 "base_bdevs_list": [ 00:15:16.527 { 00:15:16.528 "name": "spare", 00:15:16.528 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:16.528 "is_configured": true, 00:15:16.528 "data_offset": 2048, 00:15:16.528 "data_size": 63488 00:15:16.528 }, 00:15:16.528 { 00:15:16.528 "name": null, 00:15:16.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.528 "is_configured": false, 00:15:16.528 "data_offset": 2048, 00:15:16.528 "data_size": 63488 00:15:16.528 }, 00:15:16.528 { 00:15:16.528 "name": "BaseBdev3", 00:15:16.528 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:16.528 "is_configured": true, 00:15:16.528 "data_offset": 2048, 00:15:16.528 "data_size": 63488 00:15:16.528 }, 00:15:16.528 { 00:15:16.528 "name": "BaseBdev4", 00:15:16.528 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:16.528 "is_configured": true, 00:15:16.528 "data_offset": 2048, 00:15:16.528 "data_size": 63488 00:15:16.528 } 00:15:16.528 ] 00:15:16.528 }' 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.528 08:26:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.528 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.528 [2024-12-13 08:26:28.889027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.787 [2024-12-13 08:26:28.931247] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:16.787 [2024-12-13 08:26:28.931375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.787 [2024-12-13 08:26:28.931397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.787 [2024-12-13 08:26:28.931405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.787 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.787 08:26:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.788 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.788 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.788 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.788 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.788 08:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.788 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.788 "name": "raid_bdev1", 00:15:16.788 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:16.788 "strip_size_kb": 0, 00:15:16.788 "state": "online", 00:15:16.788 "raid_level": "raid1", 00:15:16.788 "superblock": true, 00:15:16.788 "num_base_bdevs": 4, 00:15:16.788 "num_base_bdevs_discovered": 2, 00:15:16.788 "num_base_bdevs_operational": 2, 00:15:16.788 "base_bdevs_list": [ 00:15:16.788 { 00:15:16.788 "name": null, 00:15:16.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.788 "is_configured": false, 00:15:16.788 "data_offset": 0, 00:15:16.788 "data_size": 63488 00:15:16.788 }, 00:15:16.788 { 00:15:16.788 "name": null, 00:15:16.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.788 "is_configured": false, 00:15:16.788 "data_offset": 2048, 00:15:16.788 "data_size": 63488 00:15:16.788 }, 00:15:16.788 { 00:15:16.788 "name": "BaseBdev3", 00:15:16.788 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:16.788 "is_configured": true, 00:15:16.788 "data_offset": 2048, 00:15:16.788 "data_size": 63488 00:15:16.788 }, 00:15:16.788 { 00:15:16.788 "name": "BaseBdev4", 00:15:16.788 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:16.788 "is_configured": true, 00:15:16.788 "data_offset": 2048, 00:15:16.788 
"data_size": 63488 00:15:16.788 } 00:15:16.788 ] 00:15:16.788 }' 00:15:16.788 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.788 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.356 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.356 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.356 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.356 [2024-12-13 08:26:29.416662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.356 [2024-12-13 08:26:29.416771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.356 [2024-12-13 08:26:29.416817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:17.356 [2024-12-13 08:26:29.416847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.356 [2024-12-13 08:26:29.417370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.356 [2024-12-13 08:26:29.417430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.356 [2024-12-13 08:26:29.417561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.356 [2024-12-13 08:26:29.417601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:17.356 [2024-12-13 08:26:29.417645] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:17.356 [2024-12-13 08:26:29.417688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.356 [2024-12-13 08:26:29.432582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:17.356 spare 00:15:17.356 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.356 08:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:17.356 [2024-12-13 08:26:29.434623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.294 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.294 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.294 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.294 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.294 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.295 "name": "raid_bdev1", 00:15:18.295 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:18.295 "strip_size_kb": 0, 00:15:18.295 
"state": "online", 00:15:18.295 "raid_level": "raid1", 00:15:18.295 "superblock": true, 00:15:18.295 "num_base_bdevs": 4, 00:15:18.295 "num_base_bdevs_discovered": 3, 00:15:18.295 "num_base_bdevs_operational": 3, 00:15:18.295 "process": { 00:15:18.295 "type": "rebuild", 00:15:18.295 "target": "spare", 00:15:18.295 "progress": { 00:15:18.295 "blocks": 20480, 00:15:18.295 "percent": 32 00:15:18.295 } 00:15:18.295 }, 00:15:18.295 "base_bdevs_list": [ 00:15:18.295 { 00:15:18.295 "name": "spare", 00:15:18.295 "uuid": "9844a2f3-9685-5b1f-904e-eefa174052f9", 00:15:18.295 "is_configured": true, 00:15:18.295 "data_offset": 2048, 00:15:18.295 "data_size": 63488 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": null, 00:15:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.295 "is_configured": false, 00:15:18.295 "data_offset": 2048, 00:15:18.295 "data_size": 63488 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": "BaseBdev3", 00:15:18.295 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:18.295 "is_configured": true, 00:15:18.295 "data_offset": 2048, 00:15:18.295 "data_size": 63488 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": "BaseBdev4", 00:15:18.295 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:18.295 "is_configured": true, 00:15:18.295 "data_offset": 2048, 00:15:18.295 "data_size": 63488 00:15:18.295 } 00:15:18.295 ] 00:15:18.295 }' 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.295 08:26:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 [2024-12-13 08:26:30.558340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.295 [2024-12-13 08:26:30.640581] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.295 [2024-12-13 08:26:30.640711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.295 [2024-12-13 08:26:30.640748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.295 [2024-12-13 08:26:30.640771] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.555 08:26:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.555 "name": "raid_bdev1", 00:15:18.555 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:18.555 "strip_size_kb": 0, 00:15:18.555 "state": "online", 00:15:18.555 "raid_level": "raid1", 00:15:18.555 "superblock": true, 00:15:18.555 "num_base_bdevs": 4, 00:15:18.555 "num_base_bdevs_discovered": 2, 00:15:18.555 "num_base_bdevs_operational": 2, 00:15:18.555 "base_bdevs_list": [ 00:15:18.555 { 00:15:18.555 "name": null, 00:15:18.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.555 "is_configured": false, 00:15:18.555 "data_offset": 0, 00:15:18.555 "data_size": 63488 00:15:18.555 }, 00:15:18.555 { 00:15:18.555 "name": null, 00:15:18.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.555 "is_configured": false, 00:15:18.555 "data_offset": 2048, 00:15:18.555 "data_size": 63488 00:15:18.555 }, 00:15:18.555 { 00:15:18.555 "name": "BaseBdev3", 00:15:18.555 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:18.555 "is_configured": true, 00:15:18.555 "data_offset": 2048, 00:15:18.555 "data_size": 63488 00:15:18.555 }, 00:15:18.555 { 00:15:18.555 "name": "BaseBdev4", 00:15:18.555 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:18.555 "is_configured": true, 00:15:18.555 "data_offset": 2048, 00:15:18.555 
"data_size": 63488 00:15:18.555 } 00:15:18.555 ] 00:15:18.555 }' 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.555 08:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.815 "name": "raid_bdev1", 00:15:18.815 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:18.815 "strip_size_kb": 0, 00:15:18.815 "state": "online", 00:15:18.815 "raid_level": "raid1", 00:15:18.815 "superblock": true, 00:15:18.815 "num_base_bdevs": 4, 00:15:18.815 "num_base_bdevs_discovered": 2, 00:15:18.815 "num_base_bdevs_operational": 2, 00:15:18.815 "base_bdevs_list": [ 00:15:18.815 { 00:15:18.815 "name": null, 00:15:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:18.815 "is_configured": false, 00:15:18.815 "data_offset": 0, 00:15:18.815 "data_size": 63488 00:15:18.815 }, 00:15:18.815 { 00:15:18.815 "name": null, 00:15:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.815 "is_configured": false, 00:15:18.815 "data_offset": 2048, 00:15:18.815 "data_size": 63488 00:15:18.815 }, 00:15:18.815 { 00:15:18.815 "name": "BaseBdev3", 00:15:18.815 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:18.815 "is_configured": true, 00:15:18.815 "data_offset": 2048, 00:15:18.815 "data_size": 63488 00:15:18.815 }, 00:15:18.815 { 00:15:18.815 "name": "BaseBdev4", 00:15:18.815 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:18.815 "is_configured": true, 00:15:18.815 "data_offset": 2048, 00:15:18.815 "data_size": 63488 00:15:18.815 } 00:15:18.815 ] 00:15:18.815 }' 00:15:18.815 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.074 08:26:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.074 [2024-12-13 08:26:31.312643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.074 [2024-12-13 08:26:31.312708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.074 [2024-12-13 08:26:31.312729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:19.074 [2024-12-13 08:26:31.312740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.074 [2024-12-13 08:26:31.313212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.074 [2024-12-13 08:26:31.313233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.074 [2024-12-13 08:26:31.313316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.074 [2024-12-13 08:26:31.313335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:19.074 [2024-12-13 08:26:31.313346] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.074 [2024-12-13 08:26:31.313360] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:19.074 BaseBdev1 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.074 08:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.012 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.272 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.272 "name": "raid_bdev1", 00:15:20.272 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:20.272 "strip_size_kb": 0, 00:15:20.272 "state": "online", 00:15:20.272 "raid_level": "raid1", 00:15:20.272 "superblock": true, 00:15:20.272 "num_base_bdevs": 4, 00:15:20.272 "num_base_bdevs_discovered": 2, 00:15:20.272 "num_base_bdevs_operational": 2, 00:15:20.272 "base_bdevs_list": [ 00:15:20.272 { 00:15:20.272 "name": null, 00:15:20.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.272 "is_configured": false, 00:15:20.272 
"data_offset": 0, 00:15:20.272 "data_size": 63488 00:15:20.272 }, 00:15:20.272 { 00:15:20.272 "name": null, 00:15:20.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.272 "is_configured": false, 00:15:20.272 "data_offset": 2048, 00:15:20.272 "data_size": 63488 00:15:20.272 }, 00:15:20.272 { 00:15:20.272 "name": "BaseBdev3", 00:15:20.272 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:20.272 "is_configured": true, 00:15:20.272 "data_offset": 2048, 00:15:20.272 "data_size": 63488 00:15:20.272 }, 00:15:20.272 { 00:15:20.272 "name": "BaseBdev4", 00:15:20.272 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:20.272 "is_configured": true, 00:15:20.272 "data_offset": 2048, 00:15:20.272 "data_size": 63488 00:15:20.272 } 00:15:20.272 ] 00:15:20.272 }' 00:15:20.272 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.272 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.532 "name": "raid_bdev1", 00:15:20.532 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:20.532 "strip_size_kb": 0, 00:15:20.532 "state": "online", 00:15:20.532 "raid_level": "raid1", 00:15:20.532 "superblock": true, 00:15:20.532 "num_base_bdevs": 4, 00:15:20.532 "num_base_bdevs_discovered": 2, 00:15:20.532 "num_base_bdevs_operational": 2, 00:15:20.532 "base_bdevs_list": [ 00:15:20.532 { 00:15:20.532 "name": null, 00:15:20.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.532 "is_configured": false, 00:15:20.532 "data_offset": 0, 00:15:20.532 "data_size": 63488 00:15:20.532 }, 00:15:20.532 { 00:15:20.532 "name": null, 00:15:20.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.532 "is_configured": false, 00:15:20.532 "data_offset": 2048, 00:15:20.532 "data_size": 63488 00:15:20.532 }, 00:15:20.532 { 00:15:20.532 "name": "BaseBdev3", 00:15:20.532 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:20.532 "is_configured": true, 00:15:20.532 "data_offset": 2048, 00:15:20.532 "data_size": 63488 00:15:20.532 }, 00:15:20.532 { 00:15:20.532 "name": "BaseBdev4", 00:15:20.532 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:20.532 "is_configured": true, 00:15:20.532 "data_offset": 2048, 00:15:20.532 "data_size": 63488 00:15:20.532 } 00:15:20.532 ] 00:15:20.532 }' 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.532 
08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.532 [2024-12-13 08:26:32.854282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.532 [2024-12-13 08:26:32.854468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:20.532 [2024-12-13 08:26:32.854481] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:20.532 request: 00:15:20.532 { 00:15:20.532 "base_bdev": "BaseBdev1", 00:15:20.532 "raid_bdev": "raid_bdev1", 00:15:20.532 "method": "bdev_raid_add_base_bdev", 00:15:20.532 "req_id": 1 00:15:20.532 } 00:15:20.532 Got JSON-RPC error response 00:15:20.532 response: 00:15:20.532 { 00:15:20.532 "code": -22, 00:15:20.532 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:20.532 } 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.532 08:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.912 08:26:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.912 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.912 "name": "raid_bdev1", 00:15:21.912 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:21.912 "strip_size_kb": 0, 00:15:21.912 "state": "online", 00:15:21.912 "raid_level": "raid1", 00:15:21.912 "superblock": true, 00:15:21.912 "num_base_bdevs": 4, 00:15:21.912 "num_base_bdevs_discovered": 2, 00:15:21.912 "num_base_bdevs_operational": 2, 00:15:21.912 "base_bdevs_list": [ 00:15:21.912 { 00:15:21.912 "name": null, 00:15:21.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.912 "is_configured": false, 00:15:21.912 "data_offset": 0, 00:15:21.912 "data_size": 63488 00:15:21.912 }, 00:15:21.912 { 00:15:21.913 "name": null, 00:15:21.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.913 "is_configured": false, 00:15:21.913 "data_offset": 2048, 00:15:21.913 "data_size": 63488 00:15:21.913 }, 00:15:21.913 { 00:15:21.913 "name": "BaseBdev3", 00:15:21.913 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:21.913 "is_configured": true, 00:15:21.913 "data_offset": 2048, 00:15:21.913 "data_size": 63488 00:15:21.913 }, 00:15:21.913 { 00:15:21.913 "name": "BaseBdev4", 00:15:21.913 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:21.913 "is_configured": true, 00:15:21.913 "data_offset": 2048, 00:15:21.913 "data_size": 63488 00:15:21.913 } 00:15:21.913 ] 00:15:21.913 }' 00:15:21.913 08:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.913 08:26:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.172 "name": "raid_bdev1", 00:15:22.172 "uuid": "27e3121a-28f7-487c-b1c6-23a8e4627697", 00:15:22.172 "strip_size_kb": 0, 00:15:22.172 "state": "online", 00:15:22.172 "raid_level": "raid1", 00:15:22.172 "superblock": true, 00:15:22.172 "num_base_bdevs": 4, 00:15:22.172 "num_base_bdevs_discovered": 2, 00:15:22.172 "num_base_bdevs_operational": 2, 00:15:22.172 "base_bdevs_list": [ 00:15:22.172 { 00:15:22.172 "name": null, 00:15:22.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.172 "is_configured": false, 00:15:22.172 "data_offset": 0, 00:15:22.172 "data_size": 63488 00:15:22.172 }, 00:15:22.172 { 00:15:22.172 "name": null, 00:15:22.172 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:22.172 "is_configured": false, 00:15:22.172 "data_offset": 2048, 00:15:22.172 "data_size": 63488 00:15:22.172 }, 00:15:22.172 { 00:15:22.172 "name": "BaseBdev3", 00:15:22.172 "uuid": "d4d9e4c3-adc6-55ca-b590-0c3310de891a", 00:15:22.172 "is_configured": true, 00:15:22.172 "data_offset": 2048, 00:15:22.172 "data_size": 63488 00:15:22.172 }, 00:15:22.172 { 00:15:22.172 "name": "BaseBdev4", 00:15:22.172 "uuid": "9aa1bc68-ef97-5230-a4f0-2e0173a46235", 00:15:22.172 "is_configured": true, 00:15:22.172 "data_offset": 2048, 00:15:22.172 "data_size": 63488 00:15:22.172 } 00:15:22.172 ] 00:15:22.172 }' 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79331 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79331 ']' 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79331 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:22.172 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79331 00:15:22.173 killing process with pid 79331 00:15:22.173 Received shutdown signal, test time was about 18.005599 seconds 00:15:22.173 00:15:22.173 Latency(us) 00:15:22.173 [2024-12-13T08:26:34.538Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:22.173 [2024-12-13T08:26:34.538Z] =================================================================================================================== 00:15:22.173 [2024-12-13T08:26:34.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79331' 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79331 00:15:22.173 08:26:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79331 00:15:22.173 [2024-12-13 08:26:34.501722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.173 [2024-12-13 08:26:34.501861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.173 [2024-12-13 08:26:34.501944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.173 [2024-12-13 08:26:34.501955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:22.741 [2024-12-13 08:26:34.926186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.117 08:26:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.117 00:15:24.117 real 0m21.459s 00:15:24.117 user 0m27.985s 00:15:24.117 sys 0m2.586s 00:15:24.117 08:26:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.117 08:26:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.117 ************************************ 00:15:24.117 END TEST raid_rebuild_test_sb_io 00:15:24.117 
************************************ 00:15:24.117 08:26:36 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:24.117 08:26:36 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:24.117 08:26:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:24.118 08:26:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.118 08:26:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.118 ************************************ 00:15:24.118 START TEST raid5f_state_function_test 00:15:24.118 ************************************ 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.118 08:26:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:24.118 Process raid pid: 80054 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80054 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80054' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80054 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80054 ']' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:24.118 08:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.118 [2024-12-13 08:26:36.288793] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:15:24.118 [2024-12-13 08:26:36.288913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.118 [2024-12-13 08:26:36.467028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.384 [2024-12-13 08:26:36.587811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.654 [2024-12-13 08:26:36.794361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.654 [2024-12-13 08:26:36.794403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.920 [2024-12-13 08:26:37.147966] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.920 [2024-12-13 08:26:37.148086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.920 [2024-12-13 08:26:37.148111] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.920 [2024-12-13 08:26:37.148122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.920 [2024-12-13 08:26:37.148129] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:24.920 [2024-12-13 08:26:37.148138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:24.920 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.920 "name": "Existed_Raid", 00:15:24.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.920 "strip_size_kb": 64, 00:15:24.920 "state": "configuring", 00:15:24.920 "raid_level": "raid5f", 00:15:24.920 "superblock": false, 00:15:24.920 "num_base_bdevs": 3, 00:15:24.920 "num_base_bdevs_discovered": 0, 00:15:24.920 "num_base_bdevs_operational": 3, 00:15:24.920 "base_bdevs_list": [ 00:15:24.920 { 00:15:24.920 "name": "BaseBdev1", 00:15:24.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.920 "is_configured": false, 00:15:24.920 "data_offset": 0, 00:15:24.920 "data_size": 0 00:15:24.920 }, 00:15:24.920 { 00:15:24.920 "name": "BaseBdev2", 00:15:24.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.920 "is_configured": false, 00:15:24.920 "data_offset": 0, 00:15:24.920 "data_size": 0 00:15:24.920 }, 00:15:24.920 { 00:15:24.920 "name": "BaseBdev3", 00:15:24.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.920 "is_configured": false, 00:15:24.920 "data_offset": 0, 00:15:24.920 "data_size": 0 00:15:24.920 } 00:15:24.920 ] 00:15:24.920 }' 00:15:24.921 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.921 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 [2024-12-13 08:26:37.607146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.489 [2024-12-13 08:26:37.607234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 [2024-12-13 08:26:37.615118] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.489 [2024-12-13 08:26:37.615197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.489 [2024-12-13 08:26:37.615229] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.489 [2024-12-13 08:26:37.615253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.489 [2024-12-13 08:26:37.615282] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.489 [2024-12-13 08:26:37.615305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 [2024-12-13 08:26:37.660958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.489 BaseBdev1 00:15:25.489 08:26:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.489 [ 00:15:25.489 { 00:15:25.489 "name": "BaseBdev1", 00:15:25.489 "aliases": [ 00:15:25.489 "2a6fb0ec-ac4e-4375-848d-440e58e2a934" 00:15:25.489 ], 00:15:25.489 "product_name": "Malloc disk", 00:15:25.489 "block_size": 512, 00:15:25.489 "num_blocks": 65536, 00:15:25.489 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:25.489 "assigned_rate_limits": { 00:15:25.489 "rw_ios_per_sec": 0, 00:15:25.489 
"rw_mbytes_per_sec": 0, 00:15:25.489 "r_mbytes_per_sec": 0, 00:15:25.489 "w_mbytes_per_sec": 0 00:15:25.489 }, 00:15:25.489 "claimed": true, 00:15:25.489 "claim_type": "exclusive_write", 00:15:25.489 "zoned": false, 00:15:25.489 "supported_io_types": { 00:15:25.489 "read": true, 00:15:25.489 "write": true, 00:15:25.489 "unmap": true, 00:15:25.489 "flush": true, 00:15:25.489 "reset": true, 00:15:25.489 "nvme_admin": false, 00:15:25.489 "nvme_io": false, 00:15:25.489 "nvme_io_md": false, 00:15:25.489 "write_zeroes": true, 00:15:25.489 "zcopy": true, 00:15:25.489 "get_zone_info": false, 00:15:25.489 "zone_management": false, 00:15:25.489 "zone_append": false, 00:15:25.489 "compare": false, 00:15:25.489 "compare_and_write": false, 00:15:25.489 "abort": true, 00:15:25.489 "seek_hole": false, 00:15:25.489 "seek_data": false, 00:15:25.489 "copy": true, 00:15:25.489 "nvme_iov_md": false 00:15:25.489 }, 00:15:25.489 "memory_domains": [ 00:15:25.489 { 00:15:25.489 "dma_device_id": "system", 00:15:25.489 "dma_device_type": 1 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.489 "dma_device_type": 2 00:15:25.489 } 00:15:25.489 ], 00:15:25.489 "driver_specific": {} 00:15:25.489 } 00:15:25.489 ] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.489 08:26:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.489 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.490 "name": "Existed_Raid", 00:15:25.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.490 "strip_size_kb": 64, 00:15:25.490 "state": "configuring", 00:15:25.490 "raid_level": "raid5f", 00:15:25.490 "superblock": false, 00:15:25.490 "num_base_bdevs": 3, 00:15:25.490 "num_base_bdevs_discovered": 1, 00:15:25.490 "num_base_bdevs_operational": 3, 00:15:25.490 "base_bdevs_list": [ 00:15:25.490 { 00:15:25.490 "name": "BaseBdev1", 00:15:25.490 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:25.490 "is_configured": true, 00:15:25.490 "data_offset": 0, 00:15:25.490 "data_size": 65536 00:15:25.490 }, 00:15:25.490 { 00:15:25.490 "name": 
"BaseBdev2", 00:15:25.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.490 "is_configured": false, 00:15:25.490 "data_offset": 0, 00:15:25.490 "data_size": 0 00:15:25.490 }, 00:15:25.490 { 00:15:25.490 "name": "BaseBdev3", 00:15:25.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.490 "is_configured": false, 00:15:25.490 "data_offset": 0, 00:15:25.490 "data_size": 0 00:15:25.490 } 00:15:25.490 ] 00:15:25.490 }' 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.490 08:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.058 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.059 [2024-12-13 08:26:38.156207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.059 [2024-12-13 08:26:38.156310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.059 [2024-12-13 08:26:38.164240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.059 [2024-12-13 08:26:38.166225] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:26.059 [2024-12-13 08:26:38.166300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.059 [2024-12-13 08:26:38.166328] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.059 [2024-12-13 08:26:38.166370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.059 "name": "Existed_Raid", 00:15:26.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.059 "strip_size_kb": 64, 00:15:26.059 "state": "configuring", 00:15:26.059 "raid_level": "raid5f", 00:15:26.059 "superblock": false, 00:15:26.059 "num_base_bdevs": 3, 00:15:26.059 "num_base_bdevs_discovered": 1, 00:15:26.059 "num_base_bdevs_operational": 3, 00:15:26.059 "base_bdevs_list": [ 00:15:26.059 { 00:15:26.059 "name": "BaseBdev1", 00:15:26.059 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:26.059 "is_configured": true, 00:15:26.059 "data_offset": 0, 00:15:26.059 "data_size": 65536 00:15:26.059 }, 00:15:26.059 { 00:15:26.059 "name": "BaseBdev2", 00:15:26.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.059 "is_configured": false, 00:15:26.059 "data_offset": 0, 00:15:26.059 "data_size": 0 00:15:26.059 }, 00:15:26.059 { 00:15:26.059 "name": "BaseBdev3", 00:15:26.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.059 "is_configured": false, 00:15:26.059 "data_offset": 0, 00:15:26.059 "data_size": 0 00:15:26.059 } 00:15:26.059 ] 00:15:26.059 }' 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.059 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.318 08:26:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.318 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.318 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.578 [2024-12-13 08:26:38.716052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.578 BaseBdev2 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.578 [ 00:15:26.578 { 00:15:26.578 "name": "BaseBdev2", 00:15:26.578 "aliases": [ 00:15:26.578 "b6a7114a-345c-4ac1-a614-1acf35c7dc69" 00:15:26.578 ], 00:15:26.578 "product_name": "Malloc disk", 00:15:26.578 "block_size": 512, 00:15:26.578 "num_blocks": 65536, 00:15:26.578 "uuid": "b6a7114a-345c-4ac1-a614-1acf35c7dc69", 00:15:26.578 "assigned_rate_limits": { 00:15:26.578 "rw_ios_per_sec": 0, 00:15:26.578 "rw_mbytes_per_sec": 0, 00:15:26.578 "r_mbytes_per_sec": 0, 00:15:26.578 "w_mbytes_per_sec": 0 00:15:26.578 }, 00:15:26.578 "claimed": true, 00:15:26.578 "claim_type": "exclusive_write", 00:15:26.578 "zoned": false, 00:15:26.578 "supported_io_types": { 00:15:26.578 "read": true, 00:15:26.578 "write": true, 00:15:26.578 "unmap": true, 00:15:26.578 "flush": true, 00:15:26.578 "reset": true, 00:15:26.578 "nvme_admin": false, 00:15:26.578 "nvme_io": false, 00:15:26.578 "nvme_io_md": false, 00:15:26.578 "write_zeroes": true, 00:15:26.578 "zcopy": true, 00:15:26.578 "get_zone_info": false, 00:15:26.578 "zone_management": false, 00:15:26.578 "zone_append": false, 00:15:26.578 "compare": false, 00:15:26.578 "compare_and_write": false, 00:15:26.578 "abort": true, 00:15:26.578 "seek_hole": false, 00:15:26.578 "seek_data": false, 00:15:26.578 "copy": true, 00:15:26.578 "nvme_iov_md": false 00:15:26.578 }, 00:15:26.578 "memory_domains": [ 00:15:26.578 { 00:15:26.578 "dma_device_id": "system", 00:15:26.578 "dma_device_type": 1 00:15:26.578 }, 00:15:26.578 { 00:15:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.578 "dma_device_type": 2 00:15:26.578 } 00:15:26.578 ], 00:15:26.578 "driver_specific": {} 00:15:26.578 } 00:15:26.578 ] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.578 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:26.578 "name": "Existed_Raid", 00:15:26.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.578 "strip_size_kb": 64, 00:15:26.578 "state": "configuring", 00:15:26.578 "raid_level": "raid5f", 00:15:26.578 "superblock": false, 00:15:26.578 "num_base_bdevs": 3, 00:15:26.578 "num_base_bdevs_discovered": 2, 00:15:26.578 "num_base_bdevs_operational": 3, 00:15:26.578 "base_bdevs_list": [ 00:15:26.578 { 00:15:26.578 "name": "BaseBdev1", 00:15:26.578 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:26.578 "is_configured": true, 00:15:26.578 "data_offset": 0, 00:15:26.578 "data_size": 65536 00:15:26.578 }, 00:15:26.578 { 00:15:26.578 "name": "BaseBdev2", 00:15:26.579 "uuid": "b6a7114a-345c-4ac1-a614-1acf35c7dc69", 00:15:26.579 "is_configured": true, 00:15:26.579 "data_offset": 0, 00:15:26.579 "data_size": 65536 00:15:26.579 }, 00:15:26.579 { 00:15:26.579 "name": "BaseBdev3", 00:15:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.579 "is_configured": false, 00:15:26.579 "data_offset": 0, 00:15:26.579 "data_size": 0 00:15:26.579 } 00:15:26.579 ] 00:15:26.579 }' 00:15:26.579 08:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.579 08:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.148 [2024-12-13 08:26:39.269469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.148 [2024-12-13 08:26:39.269645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.148 [2024-12-13 08:26:39.269683] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:27.148 [2024-12-13 08:26:39.269971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:27.148 [2024-12-13 08:26:39.275570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.148 [2024-12-13 08:26:39.275629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:27.148 [2024-12-13 08:26:39.275947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.148 BaseBdev3 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.148 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.148 [ 00:15:27.148 { 00:15:27.148 "name": "BaseBdev3", 00:15:27.148 "aliases": [ 00:15:27.148 "93ec12c0-8474-424c-b0e0-99e582ff03cc" 00:15:27.148 ], 00:15:27.148 "product_name": "Malloc disk", 00:15:27.148 "block_size": 512, 00:15:27.148 "num_blocks": 65536, 00:15:27.148 "uuid": "93ec12c0-8474-424c-b0e0-99e582ff03cc", 00:15:27.148 "assigned_rate_limits": { 00:15:27.148 "rw_ios_per_sec": 0, 00:15:27.148 "rw_mbytes_per_sec": 0, 00:15:27.148 "r_mbytes_per_sec": 0, 00:15:27.148 "w_mbytes_per_sec": 0 00:15:27.148 }, 00:15:27.148 "claimed": true, 00:15:27.148 "claim_type": "exclusive_write", 00:15:27.148 "zoned": false, 00:15:27.148 "supported_io_types": { 00:15:27.148 "read": true, 00:15:27.148 "write": true, 00:15:27.148 "unmap": true, 00:15:27.149 "flush": true, 00:15:27.149 "reset": true, 00:15:27.149 "nvme_admin": false, 00:15:27.149 "nvme_io": false, 00:15:27.149 "nvme_io_md": false, 00:15:27.149 "write_zeroes": true, 00:15:27.149 "zcopy": true, 00:15:27.149 "get_zone_info": false, 00:15:27.149 "zone_management": false, 00:15:27.149 "zone_append": false, 00:15:27.149 "compare": false, 00:15:27.149 "compare_and_write": false, 00:15:27.149 "abort": true, 00:15:27.149 "seek_hole": false, 00:15:27.149 "seek_data": false, 00:15:27.149 "copy": true, 00:15:27.149 "nvme_iov_md": false 00:15:27.149 }, 00:15:27.149 "memory_domains": [ 00:15:27.149 { 00:15:27.149 "dma_device_id": "system", 00:15:27.149 "dma_device_type": 1 00:15:27.149 }, 00:15:27.149 { 00:15:27.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.149 "dma_device_type": 2 00:15:27.149 } 00:15:27.149 ], 00:15:27.149 "driver_specific": {} 00:15:27.149 } 00:15:27.149 ] 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.149 08:26:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.149 "name": "Existed_Raid", 00:15:27.149 "uuid": "6ef0dba7-b20d-4ed7-accc-6c81973b6189", 00:15:27.149 "strip_size_kb": 64, 00:15:27.149 "state": "online", 00:15:27.149 "raid_level": "raid5f", 00:15:27.149 "superblock": false, 00:15:27.149 "num_base_bdevs": 3, 00:15:27.149 "num_base_bdevs_discovered": 3, 00:15:27.149 "num_base_bdevs_operational": 3, 00:15:27.149 "base_bdevs_list": [ 00:15:27.149 { 00:15:27.149 "name": "BaseBdev1", 00:15:27.149 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:27.149 "is_configured": true, 00:15:27.149 "data_offset": 0, 00:15:27.149 "data_size": 65536 00:15:27.149 }, 00:15:27.149 { 00:15:27.149 "name": "BaseBdev2", 00:15:27.149 "uuid": "b6a7114a-345c-4ac1-a614-1acf35c7dc69", 00:15:27.149 "is_configured": true, 00:15:27.149 "data_offset": 0, 00:15:27.149 "data_size": 65536 00:15:27.149 }, 00:15:27.149 { 00:15:27.149 "name": "BaseBdev3", 00:15:27.149 "uuid": "93ec12c0-8474-424c-b0e0-99e582ff03cc", 00:15:27.149 "is_configured": true, 00:15:27.149 "data_offset": 0, 00:15:27.149 "data_size": 65536 00:15:27.149 } 00:15:27.149 ] 00:15:27.149 }' 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.149 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.409 08:26:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.409 [2024-12-13 08:26:39.733640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.409 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.668 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.668 "name": "Existed_Raid", 00:15:27.668 "aliases": [ 00:15:27.668 "6ef0dba7-b20d-4ed7-accc-6c81973b6189" 00:15:27.668 ], 00:15:27.668 "product_name": "Raid Volume", 00:15:27.668 "block_size": 512, 00:15:27.668 "num_blocks": 131072, 00:15:27.668 "uuid": "6ef0dba7-b20d-4ed7-accc-6c81973b6189", 00:15:27.668 "assigned_rate_limits": { 00:15:27.668 "rw_ios_per_sec": 0, 00:15:27.668 "rw_mbytes_per_sec": 0, 00:15:27.668 "r_mbytes_per_sec": 0, 00:15:27.668 "w_mbytes_per_sec": 0 00:15:27.668 }, 00:15:27.668 "claimed": false, 00:15:27.668 "zoned": false, 00:15:27.668 "supported_io_types": { 00:15:27.668 "read": true, 00:15:27.668 "write": true, 00:15:27.668 "unmap": false, 00:15:27.668 "flush": false, 00:15:27.668 "reset": true, 00:15:27.668 "nvme_admin": false, 00:15:27.668 "nvme_io": false, 00:15:27.668 "nvme_io_md": false, 00:15:27.668 "write_zeroes": true, 00:15:27.668 "zcopy": false, 00:15:27.668 "get_zone_info": false, 00:15:27.668 "zone_management": false, 00:15:27.668 "zone_append": false, 
00:15:27.668 "compare": false, 00:15:27.668 "compare_and_write": false, 00:15:27.668 "abort": false, 00:15:27.668 "seek_hole": false, 00:15:27.668 "seek_data": false, 00:15:27.668 "copy": false, 00:15:27.668 "nvme_iov_md": false 00:15:27.668 }, 00:15:27.668 "driver_specific": { 00:15:27.668 "raid": { 00:15:27.668 "uuid": "6ef0dba7-b20d-4ed7-accc-6c81973b6189", 00:15:27.668 "strip_size_kb": 64, 00:15:27.668 "state": "online", 00:15:27.668 "raid_level": "raid5f", 00:15:27.668 "superblock": false, 00:15:27.668 "num_base_bdevs": 3, 00:15:27.668 "num_base_bdevs_discovered": 3, 00:15:27.668 "num_base_bdevs_operational": 3, 00:15:27.668 "base_bdevs_list": [ 00:15:27.668 { 00:15:27.668 "name": "BaseBdev1", 00:15:27.668 "uuid": "2a6fb0ec-ac4e-4375-848d-440e58e2a934", 00:15:27.668 "is_configured": true, 00:15:27.668 "data_offset": 0, 00:15:27.668 "data_size": 65536 00:15:27.668 }, 00:15:27.668 { 00:15:27.668 "name": "BaseBdev2", 00:15:27.668 "uuid": "b6a7114a-345c-4ac1-a614-1acf35c7dc69", 00:15:27.668 "is_configured": true, 00:15:27.668 "data_offset": 0, 00:15:27.669 "data_size": 65536 00:15:27.669 }, 00:15:27.669 { 00:15:27.669 "name": "BaseBdev3", 00:15:27.669 "uuid": "93ec12c0-8474-424c-b0e0-99e582ff03cc", 00:15:27.669 "is_configured": true, 00:15:27.669 "data_offset": 0, 00:15:27.669 "data_size": 65536 00:15:27.669 } 00:15:27.669 ] 00:15:27.669 } 00:15:27.669 } 00:15:27.669 }' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:27.669 BaseBdev2 00:15:27.669 BaseBdev3' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.669 08:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.669 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.669 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.669 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.669 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.669 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 [2024-12-13 08:26:40.009039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:27.928 
08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.928 "name": "Existed_Raid", 00:15:27.928 "uuid": "6ef0dba7-b20d-4ed7-accc-6c81973b6189", 00:15:27.928 "strip_size_kb": 64, 00:15:27.928 "state": 
"online", 00:15:27.928 "raid_level": "raid5f", 00:15:27.928 "superblock": false, 00:15:27.928 "num_base_bdevs": 3, 00:15:27.928 "num_base_bdevs_discovered": 2, 00:15:27.928 "num_base_bdevs_operational": 2, 00:15:27.928 "base_bdevs_list": [ 00:15:27.928 { 00:15:27.928 "name": null, 00:15:27.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.928 "is_configured": false, 00:15:27.928 "data_offset": 0, 00:15:27.928 "data_size": 65536 00:15:27.928 }, 00:15:27.928 { 00:15:27.928 "name": "BaseBdev2", 00:15:27.928 "uuid": "b6a7114a-345c-4ac1-a614-1acf35c7dc69", 00:15:27.928 "is_configured": true, 00:15:27.928 "data_offset": 0, 00:15:27.928 "data_size": 65536 00:15:27.928 }, 00:15:27.928 { 00:15:27.928 "name": "BaseBdev3", 00:15:27.928 "uuid": "93ec12c0-8474-424c-b0e0-99e582ff03cc", 00:15:27.928 "is_configured": true, 00:15:27.928 "data_offset": 0, 00:15:27.928 "data_size": 65536 00:15:27.928 } 00:15:27.928 ] 00:15:27.928 }' 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.928 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.496 [2024-12-13 08:26:40.623991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.496 [2024-12-13 08:26:40.624154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.496 [2024-12-13 08:26:40.723254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.496 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.496 [2024-12-13 08:26:40.783256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.496 [2024-12-13 08:26:40.783355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 BaseBdev2 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:28.756 [ 00:15:28.756 { 00:15:28.756 "name": "BaseBdev2", 00:15:28.756 "aliases": [ 00:15:28.756 "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7" 00:15:28.756 ], 00:15:28.756 "product_name": "Malloc disk", 00:15:28.756 "block_size": 512, 00:15:28.756 "num_blocks": 65536, 00:15:28.756 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7", 00:15:28.756 "assigned_rate_limits": { 00:15:28.756 "rw_ios_per_sec": 0, 00:15:28.756 "rw_mbytes_per_sec": 0, 00:15:28.756 "r_mbytes_per_sec": 0, 00:15:28.756 "w_mbytes_per_sec": 0 00:15:28.756 }, 00:15:28.756 "claimed": false, 00:15:28.756 "zoned": false, 00:15:28.756 "supported_io_types": { 00:15:28.756 "read": true, 00:15:28.756 "write": true, 00:15:28.756 "unmap": true, 00:15:28.756 "flush": true, 00:15:28.756 "reset": true, 00:15:28.756 "nvme_admin": false, 00:15:28.756 "nvme_io": false, 00:15:28.756 "nvme_io_md": false, 00:15:28.756 "write_zeroes": true, 00:15:28.756 "zcopy": true, 00:15:28.756 "get_zone_info": false, 00:15:28.756 "zone_management": false, 00:15:28.756 "zone_append": false, 00:15:28.756 "compare": false, 00:15:28.756 "compare_and_write": false, 00:15:28.756 "abort": true, 00:15:28.756 "seek_hole": false, 00:15:28.756 "seek_data": false, 00:15:28.756 "copy": true, 00:15:28.756 "nvme_iov_md": false 00:15:28.756 }, 00:15:28.756 "memory_domains": [ 00:15:28.756 { 00:15:28.756 "dma_device_id": "system", 00:15:28.756 "dma_device_type": 1 00:15:28.756 }, 00:15:28.756 { 00:15:28.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.756 "dma_device_type": 2 00:15:28.756 } 00:15:28.756 ], 00:15:28.756 "driver_specific": {} 00:15:28.756 } 00:15:28.756 ] 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 BaseBdev3 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.756 08:26:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.756 [ 00:15:28.756 { 00:15:28.756 "name": "BaseBdev3", 00:15:28.756 "aliases": [ 00:15:28.756 "6e4571b0-210f-4fce-b00b-b25a77692424" 00:15:28.756 ], 00:15:28.756 "product_name": "Malloc disk", 00:15:28.756 "block_size": 512, 00:15:28.756 "num_blocks": 65536, 00:15:28.756 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424", 00:15:28.756 "assigned_rate_limits": { 00:15:28.756 "rw_ios_per_sec": 0, 00:15:28.756 "rw_mbytes_per_sec": 0, 00:15:28.756 "r_mbytes_per_sec": 0, 00:15:28.756 "w_mbytes_per_sec": 0 00:15:28.756 }, 00:15:28.756 "claimed": false, 00:15:28.756 "zoned": false, 00:15:28.756 "supported_io_types": { 00:15:28.756 "read": true, 00:15:28.756 "write": true, 00:15:28.756 "unmap": true, 00:15:28.756 "flush": true, 00:15:28.756 "reset": true, 00:15:28.756 "nvme_admin": false, 00:15:28.756 "nvme_io": false, 00:15:28.756 "nvme_io_md": false, 00:15:28.756 "write_zeroes": true, 00:15:28.756 "zcopy": true, 00:15:28.756 "get_zone_info": false, 00:15:28.756 "zone_management": false, 00:15:28.756 "zone_append": false, 00:15:28.756 "compare": false, 00:15:28.756 "compare_and_write": false, 00:15:28.756 "abort": true, 00:15:28.756 "seek_hole": false, 00:15:28.756 "seek_data": false, 00:15:28.757 "copy": true, 00:15:28.757 "nvme_iov_md": false 00:15:28.757 }, 00:15:28.757 "memory_domains": [ 00:15:28.757 { 00:15:28.757 "dma_device_id": "system", 00:15:28.757 "dma_device_type": 1 00:15:28.757 }, 00:15:28.757 { 00:15:28.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.757 "dma_device_type": 2 00:15:28.757 } 00:15:28.757 ], 00:15:28.757 "driver_specific": {} 00:15:28.757 } 00:15:28.757 ] 00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.757 08:26:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:28.757 [2024-12-13 08:26:41.090614] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-13 08:26:41.090722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-13 08:26:41.090771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-13 08:26:41.092799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:28.757 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.016 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:29.016 "name": "Existed_Raid",
00:15:29.016 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.016 "strip_size_kb": 64,
00:15:29.016 "state": "configuring",
00:15:29.016 "raid_level": "raid5f",
00:15:29.016 "superblock": false,
00:15:29.016 "num_base_bdevs": 3,
00:15:29.016 "num_base_bdevs_discovered": 2,
00:15:29.016 "num_base_bdevs_operational": 3,
00:15:29.016 "base_bdevs_list": [
00:15:29.016 {
00:15:29.016 "name": "BaseBdev1",
00:15:29.016 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.016 "is_configured": false,
00:15:29.016 "data_offset": 0,
00:15:29.016 "data_size": 0
00:15:29.016 },
00:15:29.016 {
00:15:29.016 "name": "BaseBdev2",
00:15:29.016 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:29.016 "is_configured": true,
00:15:29.016 "data_offset": 0,
00:15:29.016 "data_size": 65536
00:15:29.016 },
00:15:29.016 {
00:15:29.016 "name": "BaseBdev3",
00:15:29.016 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:29.016 "is_configured": true,
00:15:29.016 "data_offset": 0,
00:15:29.016 "data_size": 65536
00:15:29.016 }
00:15:29.016 ]
00:15:29.016 }'
00:15:29.016 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:29.016 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.276 [2024-12-13 08:26:41.557865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:29.276 "name": "Existed_Raid",
00:15:29.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.276 "strip_size_kb": 64,
00:15:29.276 "state": "configuring",
00:15:29.276 "raid_level": "raid5f",
00:15:29.276 "superblock": false,
00:15:29.276 "num_base_bdevs": 3,
00:15:29.276 "num_base_bdevs_discovered": 1,
00:15:29.276 "num_base_bdevs_operational": 3,
00:15:29.276 "base_bdevs_list": [
00:15:29.276 {
00:15:29.276 "name": "BaseBdev1",
00:15:29.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.276 "is_configured": false,
00:15:29.276 "data_offset": 0,
00:15:29.276 "data_size": 0
00:15:29.276 },
00:15:29.276 {
00:15:29.276 "name": null,
00:15:29.276 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:29.276 "is_configured": false,
00:15:29.276 "data_offset": 0,
00:15:29.276 "data_size": 65536
00:15:29.276 },
00:15:29.276 {
00:15:29.276 "name": "BaseBdev3",
00:15:29.276 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:29.276 "is_configured": true,
00:15:29.276 "data_offset": 0,
00:15:29.276 "data_size": 65536
00:15:29.276 }
00:15:29.276 ]
00:15:29.276 }'
00:15:29.276 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:29.276 08:26:41
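As the trace shows, `verify_raid_bdev_state` (bdev_raid.sh@103-@115) pulls the raid bdev's description via `rpc_cmd bdev_raid_get_bdevs all | jq` and compares fields such as `state`, `raid_level` and `strip_size_kb` against the expected values. Below is a minimal standalone sketch of those checks with the RPC output inlined as a sample; the real helper extracts fields with jq, while plain bash pattern matches are used here so the sketch runs without jq or a live target.

```shell
# Sample raid_bdev_info as returned for Existed_Raid above, trimmed to the
# fields the checks below inspect (inlined, not live RPC output).
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "raid5f", "strip_size_kb": 64, "num_base_bdevs_operational": 3 }'

# Compare expected state/level/strip size against the JSON, mirroring what
# `verify_raid_bdev_state Existed_Raid configuring raid5f 64 3` asserts.
verify_state() {
    local expected_state=$1 raid_level=$2 strip_size=$3
    [[ $raid_bdev_info == *"\"state\": \"$expected_state\""* ]] &&
        [[ $raid_bdev_info == *"\"raid_level\": \"$raid_level\""* ]] &&
        [[ $raid_bdev_info == *"\"strip_size_kb\": $strip_size"* ]]
}

verify_state configuring raid5f 64 && echo "Existed_Raid: state verified"
```

The raid stays in `configuring` (rather than `online`) whenever fewer base bdevs are discovered than are operational, which is exactly what the surrounding remove/add steps keep re-checking.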
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.845 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.845 08:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.845 08:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:29.845 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.845 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:29.845 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:29.845 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.845 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.845 [2024-12-13 08:26:42.079232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.846 [
00:15:29.846 {
00:15:29.846 "name": "BaseBdev1",
00:15:29.846 "aliases": [
00:15:29.846 "bcfaa58b-b501-4021-84ba-72ae9c2dff19"
00:15:29.846 ],
00:15:29.846 "product_name": "Malloc disk",
00:15:29.846 "block_size": 512,
00:15:29.846 "num_blocks": 65536,
00:15:29.846 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:29.846 "assigned_rate_limits": {
00:15:29.846 "rw_ios_per_sec": 0,
00:15:29.846 "rw_mbytes_per_sec": 0,
00:15:29.846 "r_mbytes_per_sec": 0,
00:15:29.846 "w_mbytes_per_sec": 0
00:15:29.846 },
00:15:29.846 "claimed": true,
00:15:29.846 "claim_type": "exclusive_write",
00:15:29.846 "zoned": false,
00:15:29.846 "supported_io_types": {
00:15:29.846 "read": true,
00:15:29.846 "write": true,
00:15:29.846 "unmap": true,
00:15:29.846 "flush": true,
00:15:29.846 "reset": true,
00:15:29.846 "nvme_admin": false,
00:15:29.846 "nvme_io": false,
00:15:29.846 "nvme_io_md": false,
00:15:29.846 "write_zeroes": true,
00:15:29.846 "zcopy": true,
00:15:29.846 "get_zone_info": false,
00:15:29.846 "zone_management": false,
00:15:29.846 "zone_append": false,
00:15:29.846 "compare": false,
00:15:29.846 "compare_and_write": false,
00:15:29.846 "abort": true,
00:15:29.846 "seek_hole": false,
00:15:29.846 "seek_data": false,
00:15:29.846 "copy": true,
00:15:29.846 "nvme_iov_md": false
00:15:29.846 },
00:15:29.846 "memory_domains": [
00:15:29.846 {
00:15:29.846 "dma_device_id": "system",
00:15:29.846 "dma_device_type": 1
00:15:29.846 },
00:15:29.846 {
00:15:29.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:29.846 "dma_device_type": 2
00:15:29.846 }
00:15:29.846 ],
00:15:29.846 "driver_specific": {}
00:15:29.846 }
00:15:29.846 ]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:29.846 08:26:42
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:29.846 "name": "Existed_Raid",
00:15:29.846 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.846 "strip_size_kb": 64,
00:15:29.846 "state": "configuring",
00:15:29.846 "raid_level": "raid5f",
00:15:29.846 "superblock": false,
00:15:29.846 "num_base_bdevs": 3,
00:15:29.846 "num_base_bdevs_discovered": 2,
00:15:29.846 "num_base_bdevs_operational": 3,
00:15:29.846 "base_bdevs_list": [
00:15:29.846 {
00:15:29.846 "name": "BaseBdev1",
00:15:29.846 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:29.846 "is_configured": true,
00:15:29.846 "data_offset": 0,
00:15:29.846 "data_size": 65536
00:15:29.846 },
00:15:29.846 {
00:15:29.846 "name": null,
00:15:29.846 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:29.846 "is_configured": false,
00:15:29.846 "data_offset": 0,
00:15:29.846 "data_size": 65536
00:15:29.846 },
00:15:29.846 {
00:15:29.846 "name": "BaseBdev3",
00:15:29.846 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:29.846 "is_configured": true,
00:15:29.846 "data_offset": 0,
00:15:29.846 "data_size": 65536
00:15:29.846 }
00:15:29.846 ]
00:15:29.846 }'
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:29.846 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.415 [2024-12-13 08:26:42.610408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:30.415 "name": "Existed_Raid",
00:15:30.415 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:30.415 "strip_size_kb": 64,
00:15:30.415 "state": "configuring",
00:15:30.415 "raid_level": "raid5f",
00:15:30.415 "superblock": false,
00:15:30.415 "num_base_bdevs": 3,
00:15:30.415 "num_base_bdevs_discovered": 1,
00:15:30.415 "num_base_bdevs_operational": 3,
00:15:30.415 "base_bdevs_list": [
00:15:30.415 {
00:15:30.415 "name": "BaseBdev1",
00:15:30.415 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:30.415 "is_configured": true,
00:15:30.415 "data_offset": 0,
00:15:30.415 "data_size": 65536
00:15:30.415 },
00:15:30.415 {
00:15:30.415 "name": null,
00:15:30.415 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:30.415 "is_configured": false,
00:15:30.415 "data_offset": 0,
00:15:30.415 "data_size": 65536
00:15:30.415 },
00:15:30.415 {
00:15:30.415 "name": null,
00:15:30.415 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:30.415 "is_configured": false,
00:15:30.415 "data_offset": 0,
00:15:30.415 "data_size": 65536
00:15:30.415 }
00:15:30.415 ]
00:15:30.415 }'
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:30.415 08:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.985 [2024-12-13 08:26:43.137522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:30.985 "name": "Existed_Raid",
00:15:30.985 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:30.985 "strip_size_kb": 64,
00:15:30.985 "state": "configuring",
00:15:30.985 "raid_level": "raid5f",
00:15:30.985 "superblock": false,
00:15:30.985 "num_base_bdevs": 3,
00:15:30.985 "num_base_bdevs_discovered": 2,
00:15:30.985 "num_base_bdevs_operational": 3,
00:15:30.985 "base_bdevs_list": [
00:15:30.985 {
00:15:30.985 "name": "BaseBdev1",
00:15:30.985 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:30.985 "is_configured": true,
00:15:30.985 "data_offset": 0,
00:15:30.985 "data_size": 65536
00:15:30.985 },
00:15:30.985 {
00:15:30.985 "name": null,
00:15:30.985 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:30.985 "is_configured": false,
00:15:30.985 "data_offset": 0,
00:15:30.985 "data_size": 65536
00:15:30.985 },
00:15:30.985 {
00:15:30.985 "name": "BaseBdev3",
00:15:30.985 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:30.985 "is_configured": true,
00:15:30.985 "data_offset": 0,
00:15:30.985 "data_size": 65536
00:15:30.985 }
00:15:30.985 ]
00:15:30.985 }'
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:30.985 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:31.244 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.244 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:31.244 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.244 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:31.245 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.245 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:31.245 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:31.245 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.245 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:31.245 [2024-12-13 08:26:43.600735]
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:31.504 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.505 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:31.505 "name": "Existed_Raid",
00:15:31.505 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:31.505 "strip_size_kb": 64,
00:15:31.505 "state": "configuring",
00:15:31.505 "raid_level": "raid5f",
00:15:31.505 "superblock": false,
00:15:31.505 "num_base_bdevs": 3,
00:15:31.505 "num_base_bdevs_discovered": 1,
00:15:31.505 "num_base_bdevs_operational": 3,
00:15:31.505 "base_bdevs_list": [
00:15:31.505 {
00:15:31.505 "name": null,
00:15:31.505 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:31.505 "is_configured": false,
00:15:31.505 "data_offset": 0,
00:15:31.505 "data_size": 65536
00:15:31.505 },
00:15:31.505 {
00:15:31.505 "name": null,
00:15:31.505 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:31.505 "is_configured": false,
00:15:31.505 "data_offset": 0,
00:15:31.505 "data_size": 65536
00:15:31.505 },
00:15:31.505 {
00:15:31.505 "name": "BaseBdev3",
00:15:31.505 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:31.505 "is_configured": true,
00:15:31.505 "data_offset": 0,
00:15:31.505 "data_size": 65536
00:15:31.505 }
00:15:31.505 ]
00:15:31.505 }'
00:15:31.505 08:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:31.505 08:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test --
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.101 [2024-12-13 08:26:44.255208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:32.101 "name": "Existed_Raid",
00:15:32.101 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:32.101 "strip_size_kb": 64,
00:15:32.101 "state": "configuring",
00:15:32.101 "raid_level": "raid5f",
00:15:32.101 "superblock": false,
00:15:32.101 "num_base_bdevs": 3,
00:15:32.101 "num_base_bdevs_discovered": 2,
00:15:32.101 "num_base_bdevs_operational": 3,
00:15:32.101 "base_bdevs_list": [
00:15:32.101 {
00:15:32.101 "name": null,
00:15:32.101 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19",
00:15:32.101 "is_configured": false,
00:15:32.101 "data_offset": 0,
00:15:32.101 "data_size": 65536
00:15:32.101 },
00:15:32.101 {
00:15:32.101 "name": "BaseBdev2",
00:15:32.101 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7",
00:15:32.101 "is_configured": true,
00:15:32.101 "data_offset": 0,
00:15:32.101 "data_size": 65536
00:15:32.101 },
00:15:32.101 {
00:15:32.101 "name": "BaseBdev3",
00:15:32.101 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424",
00:15:32.101 "is_configured": true,
00:15:32.101 "data_offset": 0,
00:15:32.101 "data_size": 65536
00:15:32.101 }
00:15:32.101 ]
00:15:32.101 }'
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:32.101 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:15:32.361 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bcfaa58b-b501-4021-84ba-72ae9c2dff19
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:32.621 [2024-12-13 08:26:44.815680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
[2024-12-13 08:26:44.815837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
[2024-12-13 08:26:44.815867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
[2024-12-13 08:26:44.816162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb,
0x60d000006220 00:15:32.621 [2024-12-13 08:26:44.821680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.621 [2024-12-13 08:26:44.821741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.621 [2024-12-13 08:26:44.822073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.621 NewBaseBdev 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.621 08:26:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.621 [ 00:15:32.621 { 00:15:32.621 "name": "NewBaseBdev", 00:15:32.621 "aliases": [ 00:15:32.621 "bcfaa58b-b501-4021-84ba-72ae9c2dff19" 00:15:32.621 ], 00:15:32.621 "product_name": "Malloc disk", 00:15:32.621 "block_size": 512, 00:15:32.621 "num_blocks": 65536, 00:15:32.621 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19", 00:15:32.621 "assigned_rate_limits": { 00:15:32.621 "rw_ios_per_sec": 0, 00:15:32.621 "rw_mbytes_per_sec": 0, 00:15:32.621 "r_mbytes_per_sec": 0, 00:15:32.621 "w_mbytes_per_sec": 0 00:15:32.621 }, 00:15:32.621 "claimed": true, 00:15:32.621 "claim_type": "exclusive_write", 00:15:32.621 "zoned": false, 00:15:32.621 "supported_io_types": { 00:15:32.621 "read": true, 00:15:32.621 "write": true, 00:15:32.621 "unmap": true, 00:15:32.621 "flush": true, 00:15:32.621 "reset": true, 00:15:32.621 "nvme_admin": false, 00:15:32.621 "nvme_io": false, 00:15:32.621 "nvme_io_md": false, 00:15:32.621 "write_zeroes": true, 00:15:32.621 "zcopy": true, 00:15:32.621 "get_zone_info": false, 00:15:32.621 "zone_management": false, 00:15:32.621 "zone_append": false, 00:15:32.621 "compare": false, 00:15:32.621 "compare_and_write": false, 00:15:32.621 "abort": true, 00:15:32.621 "seek_hole": false, 00:15:32.621 "seek_data": false, 00:15:32.621 "copy": true, 00:15:32.621 "nvme_iov_md": false 00:15:32.621 }, 00:15:32.621 "memory_domains": [ 00:15:32.621 { 00:15:32.621 "dma_device_id": "system", 00:15:32.621 "dma_device_type": 1 00:15:32.621 }, 00:15:32.621 { 00:15:32.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.621 "dma_device_type": 2 00:15:32.621 } 00:15:32.621 ], 00:15:32.621 "driver_specific": {} 00:15:32.621 } 00:15:32.621 ] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.621 08:26:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.621 "name": "Existed_Raid", 00:15:32.621 "uuid": "39f3a4be-2698-4db4-8fcf-12facdc7cf14", 00:15:32.621 "strip_size_kb": 64, 00:15:32.621 "state": "online", 
00:15:32.621 "raid_level": "raid5f", 00:15:32.621 "superblock": false, 00:15:32.621 "num_base_bdevs": 3, 00:15:32.621 "num_base_bdevs_discovered": 3, 00:15:32.621 "num_base_bdevs_operational": 3, 00:15:32.621 "base_bdevs_list": [ 00:15:32.621 { 00:15:32.621 "name": "NewBaseBdev", 00:15:32.621 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19", 00:15:32.621 "is_configured": true, 00:15:32.621 "data_offset": 0, 00:15:32.621 "data_size": 65536 00:15:32.621 }, 00:15:32.621 { 00:15:32.621 "name": "BaseBdev2", 00:15:32.621 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7", 00:15:32.621 "is_configured": true, 00:15:32.621 "data_offset": 0, 00:15:32.621 "data_size": 65536 00:15:32.621 }, 00:15:32.621 { 00:15:32.621 "name": "BaseBdev3", 00:15:32.621 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424", 00:15:32.621 "is_configured": true, 00:15:32.621 "data_offset": 0, 00:15:32.621 "data_size": 65536 00:15:32.621 } 00:15:32.621 ] 00:15:32.621 }' 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.621 08:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.193 08:26:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 [2024-12-13 08:26:45.312113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.193 "name": "Existed_Raid", 00:15:33.193 "aliases": [ 00:15:33.193 "39f3a4be-2698-4db4-8fcf-12facdc7cf14" 00:15:33.193 ], 00:15:33.193 "product_name": "Raid Volume", 00:15:33.193 "block_size": 512, 00:15:33.193 "num_blocks": 131072, 00:15:33.193 "uuid": "39f3a4be-2698-4db4-8fcf-12facdc7cf14", 00:15:33.193 "assigned_rate_limits": { 00:15:33.193 "rw_ios_per_sec": 0, 00:15:33.193 "rw_mbytes_per_sec": 0, 00:15:33.193 "r_mbytes_per_sec": 0, 00:15:33.193 "w_mbytes_per_sec": 0 00:15:33.193 }, 00:15:33.193 "claimed": false, 00:15:33.193 "zoned": false, 00:15:33.193 "supported_io_types": { 00:15:33.193 "read": true, 00:15:33.193 "write": true, 00:15:33.193 "unmap": false, 00:15:33.193 "flush": false, 00:15:33.193 "reset": true, 00:15:33.193 "nvme_admin": false, 00:15:33.193 "nvme_io": false, 00:15:33.193 "nvme_io_md": false, 00:15:33.193 "write_zeroes": true, 00:15:33.193 "zcopy": false, 00:15:33.193 "get_zone_info": false, 00:15:33.193 "zone_management": false, 00:15:33.193 "zone_append": false, 00:15:33.193 "compare": false, 00:15:33.193 "compare_and_write": false, 00:15:33.193 "abort": false, 00:15:33.193 "seek_hole": false, 00:15:33.193 "seek_data": false, 00:15:33.193 "copy": false, 00:15:33.193 "nvme_iov_md": false 00:15:33.193 }, 00:15:33.193 "driver_specific": { 00:15:33.193 "raid": { 00:15:33.193 "uuid": 
"39f3a4be-2698-4db4-8fcf-12facdc7cf14", 00:15:33.193 "strip_size_kb": 64, 00:15:33.193 "state": "online", 00:15:33.193 "raid_level": "raid5f", 00:15:33.193 "superblock": false, 00:15:33.193 "num_base_bdevs": 3, 00:15:33.193 "num_base_bdevs_discovered": 3, 00:15:33.193 "num_base_bdevs_operational": 3, 00:15:33.193 "base_bdevs_list": [ 00:15:33.193 { 00:15:33.193 "name": "NewBaseBdev", 00:15:33.193 "uuid": "bcfaa58b-b501-4021-84ba-72ae9c2dff19", 00:15:33.193 "is_configured": true, 00:15:33.193 "data_offset": 0, 00:15:33.193 "data_size": 65536 00:15:33.193 }, 00:15:33.193 { 00:15:33.193 "name": "BaseBdev2", 00:15:33.193 "uuid": "e8bea8f8-bd54-437b-95ec-1dae6bbf33f7", 00:15:33.193 "is_configured": true, 00:15:33.193 "data_offset": 0, 00:15:33.193 "data_size": 65536 00:15:33.193 }, 00:15:33.193 { 00:15:33.193 "name": "BaseBdev3", 00:15:33.193 "uuid": "6e4571b0-210f-4fce-b00b-b25a77692424", 00:15:33.193 "is_configured": true, 00:15:33.193 "data_offset": 0, 00:15:33.193 "data_size": 65536 00:15:33.193 } 00:15:33.193 ] 00:15:33.193 } 00:15:33.193 } 00:15:33.193 }' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.193 BaseBdev2 00:15:33.193 BaseBdev3' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.193 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.452 [2024-12-13 08:26:45.611409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.452 [2024-12-13 08:26:45.611508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.452 [2024-12-13 08:26:45.611627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.452 [2024-12-13 08:26:45.611970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.452 [2024-12-13 08:26:45.612031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80054 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80054 ']' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80054 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80054 00:15:33.452 killing process with pid 80054 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80054' 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80054 00:15:33.452 [2024-12-13 08:26:45.658741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.452 08:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80054 00:15:33.711 [2024-12-13 08:26:45.960265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:35.091 00:15:35.091 real 0m10.900s 00:15:35.091 user 0m17.365s 00:15:35.091 sys 0m2.030s 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.091 ************************************ 00:15:35.091 END TEST raid5f_state_function_test 00:15:35.091 ************************************ 00:15:35.091 08:26:47 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:35.091 08:26:47 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:35.091 08:26:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.091 08:26:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.091 ************************************ 00:15:35.091 START TEST raid5f_state_function_test_sb 00:15:35.091 ************************************ 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:35.091 08:26:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80681 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80681' 00:15:35.091 Process raid pid: 80681 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80681 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80681 ']' 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.091 08:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.091 [2024-12-13 08:26:47.272999] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:15:35.091 [2024-12-13 08:26:47.273238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.091 [2024-12-13 08:26:47.450840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.351 [2024-12-13 08:26:47.570641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.610 [2024-12-13 08:26:47.789031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.610 [2024-12-13 08:26:47.789071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.869 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.869 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:35.869 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.870 [2024-12-13 08:26:48.123119] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.870 [2024-12-13 08:26:48.123176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.870 [2024-12-13 08:26:48.123191] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.870 [2024-12-13 08:26:48.123201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.870 [2024-12-13 08:26:48.123207] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:35.870 [2024-12-13 08:26:48.123217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.870 08:26:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.870 "name": "Existed_Raid", 00:15:35.870 "uuid": "89735804-5d32-4828-aae8-31a9b97316fb", 00:15:35.870 "strip_size_kb": 64, 00:15:35.870 "state": "configuring", 00:15:35.870 "raid_level": "raid5f", 00:15:35.870 "superblock": true, 00:15:35.870 "num_base_bdevs": 3, 00:15:35.870 "num_base_bdevs_discovered": 0, 00:15:35.870 "num_base_bdevs_operational": 3, 00:15:35.870 "base_bdevs_list": [ 00:15:35.870 { 00:15:35.870 "name": "BaseBdev1", 00:15:35.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.870 "is_configured": false, 00:15:35.870 "data_offset": 0, 00:15:35.870 "data_size": 0 00:15:35.870 }, 00:15:35.870 { 00:15:35.870 "name": "BaseBdev2", 00:15:35.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.870 "is_configured": false, 00:15:35.870 "data_offset": 0, 00:15:35.870 "data_size": 0 00:15:35.870 }, 00:15:35.870 { 00:15:35.870 "name": "BaseBdev3", 00:15:35.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.870 "is_configured": false, 00:15:35.870 "data_offset": 0, 00:15:35.870 "data_size": 0 00:15:35.870 } 00:15:35.870 ] 00:15:35.870 }' 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.870 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 [2024-12-13 08:26:48.558302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.439 
[2024-12-13 08:26:48.558408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 [2024-12-13 08:26:48.570266] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.439 [2024-12-13 08:26:48.570350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.439 [2024-12-13 08:26:48.570377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.439 [2024-12-13 08:26:48.570399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.439 [2024-12-13 08:26:48.570417] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.439 [2024-12-13 08:26:48.570439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 [2024-12-13 08:26:48.618049] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.439 BaseBdev1 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.439 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.439 [ 00:15:36.439 { 00:15:36.439 "name": "BaseBdev1", 00:15:36.439 "aliases": [ 00:15:36.439 "2d06b752-8f00-4e93-b43b-4a28f4ba1317" 00:15:36.439 ], 00:15:36.439 "product_name": "Malloc disk", 00:15:36.439 "block_size": 512, 00:15:36.439 
"num_blocks": 65536, 00:15:36.439 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:36.439 "assigned_rate_limits": { 00:15:36.439 "rw_ios_per_sec": 0, 00:15:36.439 "rw_mbytes_per_sec": 0, 00:15:36.439 "r_mbytes_per_sec": 0, 00:15:36.439 "w_mbytes_per_sec": 0 00:15:36.440 }, 00:15:36.440 "claimed": true, 00:15:36.440 "claim_type": "exclusive_write", 00:15:36.440 "zoned": false, 00:15:36.440 "supported_io_types": { 00:15:36.440 "read": true, 00:15:36.440 "write": true, 00:15:36.440 "unmap": true, 00:15:36.440 "flush": true, 00:15:36.440 "reset": true, 00:15:36.440 "nvme_admin": false, 00:15:36.440 "nvme_io": false, 00:15:36.440 "nvme_io_md": false, 00:15:36.440 "write_zeroes": true, 00:15:36.440 "zcopy": true, 00:15:36.440 "get_zone_info": false, 00:15:36.440 "zone_management": false, 00:15:36.440 "zone_append": false, 00:15:36.440 "compare": false, 00:15:36.440 "compare_and_write": false, 00:15:36.440 "abort": true, 00:15:36.440 "seek_hole": false, 00:15:36.440 "seek_data": false, 00:15:36.440 "copy": true, 00:15:36.440 "nvme_iov_md": false 00:15:36.440 }, 00:15:36.440 "memory_domains": [ 00:15:36.440 { 00:15:36.440 "dma_device_id": "system", 00:15:36.440 "dma_device_type": 1 00:15:36.440 }, 00:15:36.440 { 00:15:36.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.440 "dma_device_type": 2 00:15:36.440 } 00:15:36.440 ], 00:15:36.440 "driver_specific": {} 00:15:36.440 } 00:15:36.440 ] 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.440 "name": "Existed_Raid", 00:15:36.440 "uuid": "77654ede-8c71-4d00-8fd4-89e522cf0b53", 00:15:36.440 "strip_size_kb": 64, 00:15:36.440 "state": "configuring", 00:15:36.440 "raid_level": "raid5f", 00:15:36.440 "superblock": true, 00:15:36.440 "num_base_bdevs": 3, 00:15:36.440 "num_base_bdevs_discovered": 1, 00:15:36.440 "num_base_bdevs_operational": 3, 00:15:36.440 "base_bdevs_list": [ 00:15:36.440 { 00:15:36.440 
"name": "BaseBdev1", 00:15:36.440 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:36.440 "is_configured": true, 00:15:36.440 "data_offset": 2048, 00:15:36.440 "data_size": 63488 00:15:36.440 }, 00:15:36.440 { 00:15:36.440 "name": "BaseBdev2", 00:15:36.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.440 "is_configured": false, 00:15:36.440 "data_offset": 0, 00:15:36.440 "data_size": 0 00:15:36.440 }, 00:15:36.440 { 00:15:36.440 "name": "BaseBdev3", 00:15:36.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.440 "is_configured": false, 00:15:36.440 "data_offset": 0, 00:15:36.440 "data_size": 0 00:15:36.440 } 00:15:36.440 ] 00:15:36.440 }' 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.440 08:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.009 [2024-12-13 08:26:49.133211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.009 [2024-12-13 08:26:49.133311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.009 [2024-12-13 08:26:49.145239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.009 [2024-12-13 08:26:49.147110] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.009 [2024-12-13 08:26:49.147193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.009 [2024-12-13 08:26:49.147224] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.009 [2024-12-13 08:26:49.147247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.009 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.010 "name": "Existed_Raid", 00:15:37.010 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:37.010 "strip_size_kb": 64, 00:15:37.010 "state": "configuring", 00:15:37.010 "raid_level": "raid5f", 00:15:37.010 "superblock": true, 00:15:37.010 "num_base_bdevs": 3, 00:15:37.010 "num_base_bdevs_discovered": 1, 00:15:37.010 "num_base_bdevs_operational": 3, 00:15:37.010 "base_bdevs_list": [ 00:15:37.010 { 00:15:37.010 "name": "BaseBdev1", 00:15:37.010 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:37.010 "is_configured": true, 00:15:37.010 "data_offset": 2048, 00:15:37.010 "data_size": 63488 00:15:37.010 }, 00:15:37.010 { 00:15:37.010 "name": "BaseBdev2", 00:15:37.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.010 "is_configured": false, 00:15:37.010 "data_offset": 0, 00:15:37.010 "data_size": 0 00:15:37.010 }, 00:15:37.010 { 00:15:37.010 "name": "BaseBdev3", 00:15:37.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.010 "is_configured": false, 00:15:37.010 "data_offset": 0, 00:15:37.010 "data_size": 
0 00:15:37.010 } 00:15:37.010 ] 00:15:37.010 }' 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.010 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.269 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.269 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.269 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.529 [2024-12-13 08:26:49.634577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.529 BaseBdev2 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.529 [ 00:15:37.529 { 00:15:37.529 "name": "BaseBdev2", 00:15:37.529 "aliases": [ 00:15:37.529 "72c12e2e-315a-4023-9718-e15c753aa32c" 00:15:37.529 ], 00:15:37.529 "product_name": "Malloc disk", 00:15:37.529 "block_size": 512, 00:15:37.529 "num_blocks": 65536, 00:15:37.529 "uuid": "72c12e2e-315a-4023-9718-e15c753aa32c", 00:15:37.529 "assigned_rate_limits": { 00:15:37.529 "rw_ios_per_sec": 0, 00:15:37.529 "rw_mbytes_per_sec": 0, 00:15:37.529 "r_mbytes_per_sec": 0, 00:15:37.529 "w_mbytes_per_sec": 0 00:15:37.529 }, 00:15:37.529 "claimed": true, 00:15:37.529 "claim_type": "exclusive_write", 00:15:37.529 "zoned": false, 00:15:37.529 "supported_io_types": { 00:15:37.529 "read": true, 00:15:37.529 "write": true, 00:15:37.529 "unmap": true, 00:15:37.529 "flush": true, 00:15:37.529 "reset": true, 00:15:37.529 "nvme_admin": false, 00:15:37.529 "nvme_io": false, 00:15:37.529 "nvme_io_md": false, 00:15:37.529 "write_zeroes": true, 00:15:37.529 "zcopy": true, 00:15:37.529 "get_zone_info": false, 00:15:37.529 "zone_management": false, 00:15:37.529 "zone_append": false, 00:15:37.529 "compare": false, 00:15:37.529 "compare_and_write": false, 00:15:37.529 "abort": true, 00:15:37.529 "seek_hole": false, 00:15:37.529 "seek_data": false, 00:15:37.529 "copy": true, 00:15:37.529 "nvme_iov_md": false 00:15:37.529 }, 00:15:37.529 "memory_domains": [ 00:15:37.529 { 00:15:37.529 "dma_device_id": "system", 00:15:37.529 "dma_device_type": 1 00:15:37.529 }, 00:15:37.529 { 00:15:37.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.529 "dma_device_type": 2 00:15:37.529 } 
00:15:37.529 ], 00:15:37.529 "driver_specific": {} 00:15:37.529 } 00:15:37.529 ] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.529 "name": "Existed_Raid", 00:15:37.529 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:37.529 "strip_size_kb": 64, 00:15:37.529 "state": "configuring", 00:15:37.529 "raid_level": "raid5f", 00:15:37.529 "superblock": true, 00:15:37.529 "num_base_bdevs": 3, 00:15:37.529 "num_base_bdevs_discovered": 2, 00:15:37.529 "num_base_bdevs_operational": 3, 00:15:37.529 "base_bdevs_list": [ 00:15:37.529 { 00:15:37.529 "name": "BaseBdev1", 00:15:37.529 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:37.529 "is_configured": true, 00:15:37.529 "data_offset": 2048, 00:15:37.529 "data_size": 63488 00:15:37.529 }, 00:15:37.529 { 00:15:37.529 "name": "BaseBdev2", 00:15:37.529 "uuid": "72c12e2e-315a-4023-9718-e15c753aa32c", 00:15:37.529 "is_configured": true, 00:15:37.529 "data_offset": 2048, 00:15:37.529 "data_size": 63488 00:15:37.529 }, 00:15:37.529 { 00:15:37.529 "name": "BaseBdev3", 00:15:37.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.529 "is_configured": false, 00:15:37.529 "data_offset": 0, 00:15:37.529 "data_size": 0 00:15:37.529 } 00:15:37.529 ] 00:15:37.529 }' 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.529 08:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.788 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.789 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:37.789 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.048 [2024-12-13 08:26:50.183421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.048 [2024-12-13 08:26:50.183799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:38.048 [2024-12-13 08:26:50.183860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.048 [2024-12-13 08:26:50.184167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:38.048 BaseBdev3 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.048 [2024-12-13 08:26:50.190258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:38.048 [2024-12-13 08:26:50.190279] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:38.048 [2024-12-13 08:26:50.190549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.048 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.048 [ 00:15:38.048 { 00:15:38.048 "name": "BaseBdev3", 00:15:38.048 "aliases": [ 00:15:38.048 "c13b5fd0-4709-404a-ac1b-248e3580ccc3" 00:15:38.048 ], 00:15:38.048 "product_name": "Malloc disk", 00:15:38.048 "block_size": 512, 00:15:38.048 "num_blocks": 65536, 00:15:38.048 "uuid": "c13b5fd0-4709-404a-ac1b-248e3580ccc3", 00:15:38.048 "assigned_rate_limits": { 00:15:38.048 "rw_ios_per_sec": 0, 00:15:38.048 "rw_mbytes_per_sec": 0, 00:15:38.048 "r_mbytes_per_sec": 0, 00:15:38.048 "w_mbytes_per_sec": 0 00:15:38.048 }, 00:15:38.048 "claimed": true, 00:15:38.048 "claim_type": "exclusive_write", 00:15:38.048 "zoned": false, 00:15:38.048 "supported_io_types": { 00:15:38.048 "read": true, 00:15:38.048 "write": true, 00:15:38.048 "unmap": true, 00:15:38.048 "flush": true, 00:15:38.048 "reset": true, 00:15:38.048 "nvme_admin": false, 00:15:38.048 "nvme_io": false, 00:15:38.048 "nvme_io_md": false, 00:15:38.048 "write_zeroes": true, 00:15:38.048 "zcopy": true, 00:15:38.048 "get_zone_info": false, 00:15:38.048 "zone_management": false, 00:15:38.048 "zone_append": false, 00:15:38.048 "compare": false, 00:15:38.048 "compare_and_write": false, 00:15:38.048 "abort": true, 00:15:38.048 "seek_hole": false, 00:15:38.048 "seek_data": false, 00:15:38.048 "copy": true, 00:15:38.048 
"nvme_iov_md": false 00:15:38.048 }, 00:15:38.048 "memory_domains": [ 00:15:38.049 { 00:15:38.049 "dma_device_id": "system", 00:15:38.049 "dma_device_type": 1 00:15:38.049 }, 00:15:38.049 { 00:15:38.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.049 "dma_device_type": 2 00:15:38.049 } 00:15:38.049 ], 00:15:38.049 "driver_specific": {} 00:15:38.049 } 00:15:38.049 ] 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.049 "name": "Existed_Raid", 00:15:38.049 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:38.049 "strip_size_kb": 64, 00:15:38.049 "state": "online", 00:15:38.049 "raid_level": "raid5f", 00:15:38.049 "superblock": true, 00:15:38.049 "num_base_bdevs": 3, 00:15:38.049 "num_base_bdevs_discovered": 3, 00:15:38.049 "num_base_bdevs_operational": 3, 00:15:38.049 "base_bdevs_list": [ 00:15:38.049 { 00:15:38.049 "name": "BaseBdev1", 00:15:38.049 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:38.049 "is_configured": true, 00:15:38.049 "data_offset": 2048, 00:15:38.049 "data_size": 63488 00:15:38.049 }, 00:15:38.049 { 00:15:38.049 "name": "BaseBdev2", 00:15:38.049 "uuid": "72c12e2e-315a-4023-9718-e15c753aa32c", 00:15:38.049 "is_configured": true, 00:15:38.049 "data_offset": 2048, 00:15:38.049 "data_size": 63488 00:15:38.049 }, 00:15:38.049 { 00:15:38.049 "name": "BaseBdev3", 00:15:38.049 "uuid": "c13b5fd0-4709-404a-ac1b-248e3580ccc3", 00:15:38.049 "is_configured": true, 00:15:38.049 "data_offset": 2048, 00:15:38.049 "data_size": 63488 00:15:38.049 } 00:15:38.049 ] 00:15:38.049 }' 00:15:38.049 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.049 08:26:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.308 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.308 [2024-12-13 08:26:50.656454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.309 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.569 "name": "Existed_Raid", 00:15:38.569 "aliases": [ 00:15:38.569 "101c6aec-1f09-4b7a-bc47-f75686f8d5ae" 00:15:38.569 ], 00:15:38.569 "product_name": "Raid Volume", 00:15:38.569 "block_size": 512, 00:15:38.569 "num_blocks": 126976, 00:15:38.569 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:38.569 "assigned_rate_limits": { 00:15:38.569 "rw_ios_per_sec": 0, 00:15:38.569 
"rw_mbytes_per_sec": 0, 00:15:38.569 "r_mbytes_per_sec": 0, 00:15:38.569 "w_mbytes_per_sec": 0 00:15:38.569 }, 00:15:38.569 "claimed": false, 00:15:38.569 "zoned": false, 00:15:38.569 "supported_io_types": { 00:15:38.569 "read": true, 00:15:38.569 "write": true, 00:15:38.569 "unmap": false, 00:15:38.569 "flush": false, 00:15:38.569 "reset": true, 00:15:38.569 "nvme_admin": false, 00:15:38.569 "nvme_io": false, 00:15:38.569 "nvme_io_md": false, 00:15:38.569 "write_zeroes": true, 00:15:38.569 "zcopy": false, 00:15:38.569 "get_zone_info": false, 00:15:38.569 "zone_management": false, 00:15:38.569 "zone_append": false, 00:15:38.569 "compare": false, 00:15:38.569 "compare_and_write": false, 00:15:38.569 "abort": false, 00:15:38.569 "seek_hole": false, 00:15:38.569 "seek_data": false, 00:15:38.569 "copy": false, 00:15:38.569 "nvme_iov_md": false 00:15:38.569 }, 00:15:38.569 "driver_specific": { 00:15:38.569 "raid": { 00:15:38.569 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:38.569 "strip_size_kb": 64, 00:15:38.569 "state": "online", 00:15:38.569 "raid_level": "raid5f", 00:15:38.569 "superblock": true, 00:15:38.569 "num_base_bdevs": 3, 00:15:38.569 "num_base_bdevs_discovered": 3, 00:15:38.569 "num_base_bdevs_operational": 3, 00:15:38.569 "base_bdevs_list": [ 00:15:38.569 { 00:15:38.569 "name": "BaseBdev1", 00:15:38.569 "uuid": "2d06b752-8f00-4e93-b43b-4a28f4ba1317", 00:15:38.569 "is_configured": true, 00:15:38.569 "data_offset": 2048, 00:15:38.569 "data_size": 63488 00:15:38.569 }, 00:15:38.569 { 00:15:38.569 "name": "BaseBdev2", 00:15:38.569 "uuid": "72c12e2e-315a-4023-9718-e15c753aa32c", 00:15:38.569 "is_configured": true, 00:15:38.569 "data_offset": 2048, 00:15:38.569 "data_size": 63488 00:15:38.569 }, 00:15:38.569 { 00:15:38.569 "name": "BaseBdev3", 00:15:38.569 "uuid": "c13b5fd0-4709-404a-ac1b-248e3580ccc3", 00:15:38.569 "is_configured": true, 00:15:38.569 "data_offset": 2048, 00:15:38.569 "data_size": 63488 00:15:38.569 } 00:15:38.569 ] 00:15:38.569 } 
00:15:38.569 } 00:15:38.569 }' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:38.569 BaseBdev2 00:15:38.569 BaseBdev3' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.569 08:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.569 [2024-12-13 08:26:50.923889] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.829 "name": "Existed_Raid", 00:15:38.829 "uuid": "101c6aec-1f09-4b7a-bc47-f75686f8d5ae", 00:15:38.829 "strip_size_kb": 64, 00:15:38.829 "state": "online", 00:15:38.829 "raid_level": "raid5f", 00:15:38.829 "superblock": true, 00:15:38.829 "num_base_bdevs": 3, 00:15:38.829 "num_base_bdevs_discovered": 2, 00:15:38.829 "num_base_bdevs_operational": 2, 00:15:38.829 "base_bdevs_list": [ 00:15:38.829 { 00:15:38.829 "name": null, 00:15:38.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.829 "is_configured": false, 00:15:38.829 "data_offset": 0, 00:15:38.829 "data_size": 63488 00:15:38.829 }, 00:15:38.829 { 00:15:38.829 "name": "BaseBdev2", 00:15:38.829 "uuid": "72c12e2e-315a-4023-9718-e15c753aa32c", 00:15:38.829 "is_configured": true, 00:15:38.829 "data_offset": 2048, 00:15:38.829 "data_size": 63488 00:15:38.829 }, 00:15:38.829 { 00:15:38.829 "name": "BaseBdev3", 00:15:38.829 "uuid": "c13b5fd0-4709-404a-ac1b-248e3580ccc3", 00:15:38.829 "is_configured": true, 00:15:38.829 "data_offset": 2048, 00:15:38.829 "data_size": 63488 00:15:38.829 } 00:15:38.829 ] 00:15:38.829 }' 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.829 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 08:26:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 [2024-12-13 08:26:51.522386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.398 [2024-12-13 08:26:51.522580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.398 [2024-12-13 08:26:51.620136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.398 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 [2024-12-13 08:26:51.680081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.398 [2024-12-13 08:26:51.680211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 BaseBdev2 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 [ 00:15:39.658 { 00:15:39.658 "name": "BaseBdev2", 00:15:39.658 "aliases": [ 00:15:39.658 "14ecddda-b00c-4c37-99fb-8d1a92f2351f" 00:15:39.658 ], 00:15:39.658 "product_name": "Malloc disk", 00:15:39.658 "block_size": 512, 00:15:39.658 "num_blocks": 65536, 00:15:39.658 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:39.658 "assigned_rate_limits": { 00:15:39.658 "rw_ios_per_sec": 0, 00:15:39.658 "rw_mbytes_per_sec": 0, 00:15:39.658 "r_mbytes_per_sec": 0, 00:15:39.658 "w_mbytes_per_sec": 0 00:15:39.658 }, 00:15:39.658 "claimed": false, 00:15:39.658 "zoned": false, 00:15:39.658 "supported_io_types": { 00:15:39.658 "read": true, 00:15:39.658 "write": true, 00:15:39.658 "unmap": true, 00:15:39.658 "flush": true, 00:15:39.658 "reset": true, 00:15:39.658 "nvme_admin": false, 00:15:39.658 "nvme_io": false, 00:15:39.658 "nvme_io_md": false, 00:15:39.658 "write_zeroes": true, 00:15:39.658 "zcopy": true, 00:15:39.658 "get_zone_info": false, 00:15:39.658 "zone_management": false, 00:15:39.658 "zone_append": false, 
00:15:39.658 "compare": false, 00:15:39.658 "compare_and_write": false, 00:15:39.658 "abort": true, 00:15:39.658 "seek_hole": false, 00:15:39.658 "seek_data": false, 00:15:39.658 "copy": true, 00:15:39.659 "nvme_iov_md": false 00:15:39.659 }, 00:15:39.659 "memory_domains": [ 00:15:39.659 { 00:15:39.659 "dma_device_id": "system", 00:15:39.659 "dma_device_type": 1 00:15:39.659 }, 00:15:39.659 { 00:15:39.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.659 "dma_device_type": 2 00:15:39.659 } 00:15:39.659 ], 00:15:39.659 "driver_specific": {} 00:15:39.659 } 00:15:39.659 ] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.659 BaseBdev3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.659 
08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.659 [ 00:15:39.659 { 00:15:39.659 "name": "BaseBdev3", 00:15:39.659 "aliases": [ 00:15:39.659 "7ed1e5e0-0c7b-4815-b233-6c6939b73689" 00:15:39.659 ], 00:15:39.659 "product_name": "Malloc disk", 00:15:39.659 "block_size": 512, 00:15:39.659 "num_blocks": 65536, 00:15:39.659 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:39.659 "assigned_rate_limits": { 00:15:39.659 "rw_ios_per_sec": 0, 00:15:39.659 "rw_mbytes_per_sec": 0, 00:15:39.659 "r_mbytes_per_sec": 0, 00:15:39.659 "w_mbytes_per_sec": 0 00:15:39.659 }, 00:15:39.659 "claimed": false, 00:15:39.659 "zoned": false, 00:15:39.659 "supported_io_types": { 00:15:39.659 "read": true, 00:15:39.659 "write": true, 00:15:39.659 "unmap": true, 00:15:39.659 "flush": true, 00:15:39.659 "reset": true, 00:15:39.659 "nvme_admin": false, 00:15:39.659 "nvme_io": false, 00:15:39.659 "nvme_io_md": false, 00:15:39.659 "write_zeroes": true, 00:15:39.659 "zcopy": true, 00:15:39.659 "get_zone_info": 
false, 00:15:39.659 "zone_management": false, 00:15:39.659 "zone_append": false, 00:15:39.659 "compare": false, 00:15:39.659 "compare_and_write": false, 00:15:39.659 "abort": true, 00:15:39.659 "seek_hole": false, 00:15:39.659 "seek_data": false, 00:15:39.659 "copy": true, 00:15:39.659 "nvme_iov_md": false 00:15:39.659 }, 00:15:39.659 "memory_domains": [ 00:15:39.659 { 00:15:39.659 "dma_device_id": "system", 00:15:39.659 "dma_device_type": 1 00:15:39.659 }, 00:15:39.659 { 00:15:39.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.659 "dma_device_type": 2 00:15:39.659 } 00:15:39.659 ], 00:15:39.659 "driver_specific": {} 00:15:39.659 } 00:15:39.659 ] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.659 [2024-12-13 08:26:51.985174] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.659 [2024-12-13 08:26:51.985259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.659 [2024-12-13 08:26:51.985302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.659 [2024-12-13 08:26:51.987128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.659 08:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.659 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.919 08:26:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.919 "name": "Existed_Raid", 00:15:39.919 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:39.919 "strip_size_kb": 64, 00:15:39.919 "state": "configuring", 00:15:39.919 "raid_level": "raid5f", 00:15:39.919 "superblock": true, 00:15:39.919 "num_base_bdevs": 3, 00:15:39.919 "num_base_bdevs_discovered": 2, 00:15:39.919 "num_base_bdevs_operational": 3, 00:15:39.919 "base_bdevs_list": [ 00:15:39.919 { 00:15:39.919 "name": "BaseBdev1", 00:15:39.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.919 "is_configured": false, 00:15:39.919 "data_offset": 0, 00:15:39.919 "data_size": 0 00:15:39.919 }, 00:15:39.919 { 00:15:39.919 "name": "BaseBdev2", 00:15:39.919 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:39.919 "is_configured": true, 00:15:39.919 "data_offset": 2048, 00:15:39.919 "data_size": 63488 00:15:39.919 }, 00:15:39.919 { 00:15:39.919 "name": "BaseBdev3", 00:15:39.919 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:39.919 "is_configured": true, 00:15:39.919 "data_offset": 2048, 00:15:39.919 "data_size": 63488 00:15:39.919 } 00:15:39.919 ] 00:15:39.919 }' 00:15:39.919 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.919 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.179 [2024-12-13 08:26:52.488722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.179 
08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.179 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.179 "name": "Existed_Raid", 00:15:40.179 "uuid": 
"db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:40.179 "strip_size_kb": 64, 00:15:40.179 "state": "configuring", 00:15:40.179 "raid_level": "raid5f", 00:15:40.179 "superblock": true, 00:15:40.179 "num_base_bdevs": 3, 00:15:40.179 "num_base_bdevs_discovered": 1, 00:15:40.179 "num_base_bdevs_operational": 3, 00:15:40.179 "base_bdevs_list": [ 00:15:40.179 { 00:15:40.179 "name": "BaseBdev1", 00:15:40.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.179 "is_configured": false, 00:15:40.179 "data_offset": 0, 00:15:40.179 "data_size": 0 00:15:40.179 }, 00:15:40.179 { 00:15:40.179 "name": null, 00:15:40.179 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:40.179 "is_configured": false, 00:15:40.179 "data_offset": 0, 00:15:40.179 "data_size": 63488 00:15:40.179 }, 00:15:40.179 { 00:15:40.179 "name": "BaseBdev3", 00:15:40.179 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:40.179 "is_configured": true, 00:15:40.179 "data_offset": 2048, 00:15:40.179 "data_size": 63488 00:15:40.179 } 00:15:40.179 ] 00:15:40.179 }' 00:15:40.439 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.439 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:40.705 08:26:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.705 08:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 [2024-12-13 08:26:53.026145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.705 BaseBdev1 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:40.705 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 [ 00:15:40.705 { 00:15:40.705 "name": "BaseBdev1", 00:15:40.705 "aliases": [ 00:15:40.705 "892ef622-ec25-47cc-a7bd-67af9d79226f" 00:15:40.705 ], 00:15:40.705 "product_name": "Malloc disk", 00:15:40.705 "block_size": 512, 00:15:40.705 "num_blocks": 65536, 00:15:40.705 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:40.705 "assigned_rate_limits": { 00:15:40.705 "rw_ios_per_sec": 0, 00:15:40.705 "rw_mbytes_per_sec": 0, 00:15:40.705 "r_mbytes_per_sec": 0, 00:15:40.705 "w_mbytes_per_sec": 0 00:15:40.705 }, 00:15:40.705 "claimed": true, 00:15:40.705 "claim_type": "exclusive_write", 00:15:40.705 "zoned": false, 00:15:40.705 "supported_io_types": { 00:15:40.705 "read": true, 00:15:40.705 "write": true, 00:15:40.705 "unmap": true, 00:15:40.705 "flush": true, 00:15:40.705 "reset": true, 00:15:40.705 "nvme_admin": false, 00:15:40.705 "nvme_io": false, 00:15:40.706 "nvme_io_md": false, 00:15:40.706 "write_zeroes": true, 00:15:40.706 "zcopy": true, 00:15:40.706 "get_zone_info": false, 00:15:40.706 "zone_management": false, 00:15:40.706 "zone_append": false, 00:15:40.706 "compare": false, 00:15:40.706 "compare_and_write": false, 00:15:40.706 "abort": true, 00:15:40.706 "seek_hole": false, 00:15:40.706 "seek_data": false, 00:15:40.706 "copy": true, 00:15:40.706 "nvme_iov_md": false 00:15:40.706 }, 00:15:40.706 "memory_domains": [ 00:15:40.706 { 00:15:40.706 "dma_device_id": "system", 00:15:40.706 "dma_device_type": 1 00:15:40.706 }, 00:15:40.706 { 00:15:40.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.706 "dma_device_type": 2 00:15:40.706 } 00:15:40.706 ], 00:15:40.706 "driver_specific": {} 00:15:40.706 } 00:15:40.706 ] 00:15:40.706 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.706 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:40.706 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.997 "name": "Existed_Raid", 00:15:40.997 "uuid": 
"db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:40.997 "strip_size_kb": 64, 00:15:40.997 "state": "configuring", 00:15:40.997 "raid_level": "raid5f", 00:15:40.997 "superblock": true, 00:15:40.997 "num_base_bdevs": 3, 00:15:40.997 "num_base_bdevs_discovered": 2, 00:15:40.997 "num_base_bdevs_operational": 3, 00:15:40.997 "base_bdevs_list": [ 00:15:40.997 { 00:15:40.997 "name": "BaseBdev1", 00:15:40.997 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:40.997 "is_configured": true, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 }, 00:15:40.997 { 00:15:40.997 "name": null, 00:15:40.997 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:40.997 "is_configured": false, 00:15:40.997 "data_offset": 0, 00:15:40.997 "data_size": 63488 00:15:40.997 }, 00:15:40.997 { 00:15:40.997 "name": "BaseBdev3", 00:15:40.997 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:40.997 "is_configured": true, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 } 00:15:40.997 ] 00:15:40.997 }' 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.997 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:41.257 08:26:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.257 [2024-12-13 08:26:53.573260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.257 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.258 "name": "Existed_Raid", 00:15:41.258 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:41.258 "strip_size_kb": 64, 00:15:41.258 "state": "configuring", 00:15:41.258 "raid_level": "raid5f", 00:15:41.258 "superblock": true, 00:15:41.258 "num_base_bdevs": 3, 00:15:41.258 "num_base_bdevs_discovered": 1, 00:15:41.258 "num_base_bdevs_operational": 3, 00:15:41.258 "base_bdevs_list": [ 00:15:41.258 { 00:15:41.258 "name": "BaseBdev1", 00:15:41.258 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:41.258 "is_configured": true, 00:15:41.258 "data_offset": 2048, 00:15:41.258 "data_size": 63488 00:15:41.258 }, 00:15:41.258 { 00:15:41.258 "name": null, 00:15:41.258 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:41.258 "is_configured": false, 00:15:41.258 "data_offset": 0, 00:15:41.258 "data_size": 63488 00:15:41.258 }, 00:15:41.258 { 00:15:41.258 "name": null, 00:15:41.258 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:41.258 "is_configured": false, 00:15:41.258 "data_offset": 0, 00:15:41.258 "data_size": 63488 00:15:41.258 } 00:15:41.258 ] 00:15:41.258 }' 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.258 08:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.825 [2024-12-13 08:26:54.096436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.825 08:26:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.825 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.826 "name": "Existed_Raid", 00:15:41.826 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:41.826 "strip_size_kb": 64, 00:15:41.826 "state": "configuring", 00:15:41.826 "raid_level": "raid5f", 00:15:41.826 "superblock": true, 00:15:41.826 "num_base_bdevs": 3, 00:15:41.826 "num_base_bdevs_discovered": 2, 00:15:41.826 "num_base_bdevs_operational": 3, 00:15:41.826 "base_bdevs_list": [ 00:15:41.826 { 00:15:41.826 "name": "BaseBdev1", 00:15:41.826 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:41.826 "is_configured": true, 00:15:41.826 "data_offset": 2048, 00:15:41.826 "data_size": 63488 00:15:41.826 }, 00:15:41.826 { 00:15:41.826 "name": null, 00:15:41.826 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:41.826 "is_configured": false, 00:15:41.826 "data_offset": 0, 00:15:41.826 "data_size": 63488 00:15:41.826 }, 00:15:41.826 { 00:15:41.826 "name": "BaseBdev3", 00:15:41.826 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:41.826 
"is_configured": true, 00:15:41.826 "data_offset": 2048, 00:15:41.826 "data_size": 63488 00:15:41.826 } 00:15:41.826 ] 00:15:41.826 }' 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.826 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.394 [2024-12-13 08:26:54.655511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.394 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.653 "name": "Existed_Raid", 00:15:42.653 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:42.653 "strip_size_kb": 64, 00:15:42.653 "state": "configuring", 00:15:42.653 "raid_level": "raid5f", 00:15:42.653 "superblock": true, 00:15:42.653 "num_base_bdevs": 3, 00:15:42.653 "num_base_bdevs_discovered": 1, 00:15:42.653 "num_base_bdevs_operational": 3, 00:15:42.653 "base_bdevs_list": [ 00:15:42.653 { 00:15:42.653 "name": null, 00:15:42.653 
"uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:42.653 "is_configured": false, 00:15:42.653 "data_offset": 0, 00:15:42.653 "data_size": 63488 00:15:42.653 }, 00:15:42.653 { 00:15:42.653 "name": null, 00:15:42.653 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:42.653 "is_configured": false, 00:15:42.653 "data_offset": 0, 00:15:42.653 "data_size": 63488 00:15:42.653 }, 00:15:42.653 { 00:15:42.653 "name": "BaseBdev3", 00:15:42.653 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:42.653 "is_configured": true, 00:15:42.653 "data_offset": 2048, 00:15:42.653 "data_size": 63488 00:15:42.653 } 00:15:42.653 ] 00:15:42.653 }' 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.653 08:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.911 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.911 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.911 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.911 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.911 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.170 [2024-12-13 08:26:55.280720] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.170 "name": "Existed_Raid", 00:15:43.170 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:43.170 "strip_size_kb": 64, 00:15:43.170 "state": "configuring", 00:15:43.170 "raid_level": "raid5f", 00:15:43.170 "superblock": true, 00:15:43.170 "num_base_bdevs": 3, 00:15:43.170 "num_base_bdevs_discovered": 2, 00:15:43.170 "num_base_bdevs_operational": 3, 00:15:43.170 "base_bdevs_list": [ 00:15:43.170 { 00:15:43.170 "name": null, 00:15:43.170 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:43.170 "is_configured": false, 00:15:43.170 "data_offset": 0, 00:15:43.170 "data_size": 63488 00:15:43.170 }, 00:15:43.170 { 00:15:43.170 "name": "BaseBdev2", 00:15:43.170 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:43.170 "is_configured": true, 00:15:43.170 "data_offset": 2048, 00:15:43.170 "data_size": 63488 00:15:43.170 }, 00:15:43.170 { 00:15:43.170 "name": "BaseBdev3", 00:15:43.170 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:43.170 "is_configured": true, 00:15:43.170 "data_offset": 2048, 00:15:43.170 "data_size": 63488 00:15:43.170 } 00:15:43.170 ] 00:15:43.170 }' 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.170 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.429 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:43.429 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.429 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.429 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 892ef622-ec25-47cc-a7bd-67af9d79226f 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 [2024-12-13 08:26:55.893617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:43.689 [2024-12-13 08:26:55.893959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.689 [2024-12-13 08:26:55.894026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.689 [2024-12-13 08:26:55.894365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:43.689 NewBaseBdev 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 [2024-12-13 08:26:55.900214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.689 [2024-12-13 08:26:55.900283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:43.689 [2024-12-13 08:26:55.900548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 [ 00:15:43.689 { 00:15:43.689 "name": "NewBaseBdev", 00:15:43.689 "aliases": [ 00:15:43.689 "892ef622-ec25-47cc-a7bd-67af9d79226f" 00:15:43.689 ], 00:15:43.689 "product_name": "Malloc disk", 00:15:43.689 "block_size": 512, 
00:15:43.689 "num_blocks": 65536, 00:15:43.689 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:43.689 "assigned_rate_limits": { 00:15:43.689 "rw_ios_per_sec": 0, 00:15:43.689 "rw_mbytes_per_sec": 0, 00:15:43.689 "r_mbytes_per_sec": 0, 00:15:43.689 "w_mbytes_per_sec": 0 00:15:43.689 }, 00:15:43.689 "claimed": true, 00:15:43.689 "claim_type": "exclusive_write", 00:15:43.689 "zoned": false, 00:15:43.689 "supported_io_types": { 00:15:43.689 "read": true, 00:15:43.689 "write": true, 00:15:43.689 "unmap": true, 00:15:43.689 "flush": true, 00:15:43.689 "reset": true, 00:15:43.689 "nvme_admin": false, 00:15:43.689 "nvme_io": false, 00:15:43.689 "nvme_io_md": false, 00:15:43.689 "write_zeroes": true, 00:15:43.689 "zcopy": true, 00:15:43.689 "get_zone_info": false, 00:15:43.689 "zone_management": false, 00:15:43.689 "zone_append": false, 00:15:43.689 "compare": false, 00:15:43.689 "compare_and_write": false, 00:15:43.689 "abort": true, 00:15:43.689 "seek_hole": false, 00:15:43.689 "seek_data": false, 00:15:43.689 "copy": true, 00:15:43.689 "nvme_iov_md": false 00:15:43.689 }, 00:15:43.689 "memory_domains": [ 00:15:43.689 { 00:15:43.689 "dma_device_id": "system", 00:15:43.689 "dma_device_type": 1 00:15:43.689 }, 00:15:43.689 { 00:15:43.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.689 "dma_device_type": 2 00:15:43.689 } 00:15:43.689 ], 00:15:43.689 "driver_specific": {} 00:15:43.689 } 00:15:43.689 ] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.689 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.689 "name": "Existed_Raid", 00:15:43.689 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:43.689 "strip_size_kb": 64, 00:15:43.689 "state": "online", 00:15:43.689 "raid_level": "raid5f", 00:15:43.689 "superblock": true, 00:15:43.689 "num_base_bdevs": 3, 00:15:43.689 "num_base_bdevs_discovered": 3, 00:15:43.689 "num_base_bdevs_operational": 3, 00:15:43.690 "base_bdevs_list": [ 00:15:43.690 { 00:15:43.690 "name": 
"NewBaseBdev", 00:15:43.690 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:43.690 "is_configured": true, 00:15:43.690 "data_offset": 2048, 00:15:43.690 "data_size": 63488 00:15:43.690 }, 00:15:43.690 { 00:15:43.690 "name": "BaseBdev2", 00:15:43.690 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:43.690 "is_configured": true, 00:15:43.690 "data_offset": 2048, 00:15:43.690 "data_size": 63488 00:15:43.690 }, 00:15:43.690 { 00:15:43.690 "name": "BaseBdev3", 00:15:43.690 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:43.690 "is_configured": true, 00:15:43.690 "data_offset": 2048, 00:15:43.690 "data_size": 63488 00:15:43.690 } 00:15:43.690 ] 00:15:43.690 }' 00:15:43.690 08:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.690 08:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.259 08:26:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.259 [2024-12-13 08:26:56.450491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.259 "name": "Existed_Raid", 00:15:44.259 "aliases": [ 00:15:44.259 "db6cd2d9-2588-41ef-b68d-0058be382834" 00:15:44.259 ], 00:15:44.259 "product_name": "Raid Volume", 00:15:44.259 "block_size": 512, 00:15:44.259 "num_blocks": 126976, 00:15:44.259 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:44.259 "assigned_rate_limits": { 00:15:44.259 "rw_ios_per_sec": 0, 00:15:44.259 "rw_mbytes_per_sec": 0, 00:15:44.259 "r_mbytes_per_sec": 0, 00:15:44.259 "w_mbytes_per_sec": 0 00:15:44.259 }, 00:15:44.259 "claimed": false, 00:15:44.259 "zoned": false, 00:15:44.259 "supported_io_types": { 00:15:44.259 "read": true, 00:15:44.259 "write": true, 00:15:44.259 "unmap": false, 00:15:44.259 "flush": false, 00:15:44.259 "reset": true, 00:15:44.259 "nvme_admin": false, 00:15:44.259 "nvme_io": false, 00:15:44.259 "nvme_io_md": false, 00:15:44.259 "write_zeroes": true, 00:15:44.259 "zcopy": false, 00:15:44.259 "get_zone_info": false, 00:15:44.259 "zone_management": false, 00:15:44.259 "zone_append": false, 00:15:44.259 "compare": false, 00:15:44.259 "compare_and_write": false, 00:15:44.259 "abort": false, 00:15:44.259 "seek_hole": false, 00:15:44.259 "seek_data": false, 00:15:44.259 "copy": false, 00:15:44.259 "nvme_iov_md": false 00:15:44.259 }, 00:15:44.259 "driver_specific": { 00:15:44.259 "raid": { 00:15:44.259 "uuid": "db6cd2d9-2588-41ef-b68d-0058be382834", 00:15:44.259 "strip_size_kb": 64, 00:15:44.259 "state": "online", 00:15:44.259 "raid_level": "raid5f", 00:15:44.259 "superblock": true, 00:15:44.259 "num_base_bdevs": 3, 00:15:44.259 
"num_base_bdevs_discovered": 3, 00:15:44.259 "num_base_bdevs_operational": 3, 00:15:44.259 "base_bdevs_list": [ 00:15:44.259 { 00:15:44.259 "name": "NewBaseBdev", 00:15:44.259 "uuid": "892ef622-ec25-47cc-a7bd-67af9d79226f", 00:15:44.259 "is_configured": true, 00:15:44.259 "data_offset": 2048, 00:15:44.259 "data_size": 63488 00:15:44.259 }, 00:15:44.259 { 00:15:44.259 "name": "BaseBdev2", 00:15:44.259 "uuid": "14ecddda-b00c-4c37-99fb-8d1a92f2351f", 00:15:44.259 "is_configured": true, 00:15:44.259 "data_offset": 2048, 00:15:44.259 "data_size": 63488 00:15:44.259 }, 00:15:44.259 { 00:15:44.259 "name": "BaseBdev3", 00:15:44.259 "uuid": "7ed1e5e0-0c7b-4815-b233-6c6939b73689", 00:15:44.259 "is_configured": true, 00:15:44.259 "data_offset": 2048, 00:15:44.259 "data_size": 63488 00:15:44.259 } 00:15:44.259 ] 00:15:44.259 } 00:15:44.259 } 00:15:44.259 }' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:44.259 BaseBdev2 00:15:44.259 BaseBdev3' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.259 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.518 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.519 
08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 [2024-12-13 08:26:56.765750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.519 [2024-12-13 08:26:56.765823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.519 [2024-12-13 08:26:56.765937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.519 [2024-12-13 08:26:56.766284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.519 [2024-12-13 08:26:56.766355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80681 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80681 ']' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80681 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80681 00:15:44.519 killing process with pid 80681 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80681' 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80681 00:15:44.519 [2024-12-13 08:26:56.807440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.519 08:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80681 00:15:44.778 [2024-12-13 08:26:57.110033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.157 ************************************ 00:15:46.157 END TEST raid5f_state_function_test_sb 00:15:46.157 ************************************ 00:15:46.157 08:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:46.157 00:15:46.157 real 0m11.062s 00:15:46.157 user 0m17.642s 00:15:46.157 sys 0m2.076s 00:15:46.157 08:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.157 08:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.157 08:26:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:46.157 08:26:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:46.157 08:26:58 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.157 08:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.157 ************************************ 00:15:46.157 START TEST raid5f_superblock_test 00:15:46.157 ************************************ 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:46.157 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81312 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81312 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81312 ']' 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.158 08:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.158 [2024-12-13 08:26:58.392653] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:15:46.158 [2024-12-13 08:26:58.392865] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81312 ] 00:15:46.417 [2024-12-13 08:26:58.564431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.417 [2024-12-13 08:26:58.681701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.676 [2024-12-13 08:26:58.887045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.676 [2024-12-13 08:26:58.887237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.936 malloc1 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.936 [2024-12-13 08:26:59.294325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.936 [2024-12-13 08:26:59.294472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.936 [2024-12-13 08:26:59.294531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:46.936 [2024-12-13 08:26:59.294583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.936 [2024-12-13 08:26:59.296822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.936 [2024-12-13 08:26:59.296901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.936 pt1 00:15:46.936 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 malloc2 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 [2024-12-13 08:26:59.350435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.196 [2024-12-13 08:26:59.350541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.196 [2024-12-13 08:26:59.350579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:47.196 [2024-12-13 08:26:59.350606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.196 [2024-12-13 08:26:59.352711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.196 [2024-12-13 08:26:59.352779] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.196 pt2 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 malloc3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 [2024-12-13 08:26:59.418649] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.196 [2024-12-13 08:26:59.418752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.196 [2024-12-13 08:26:59.418790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:47.196 [2024-12-13 08:26:59.418817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.196 [2024-12-13 08:26:59.420802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.196 [2024-12-13 08:26:59.420876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.196 pt3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 [2024-12-13 08:26:59.430677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.196 [2024-12-13 08:26:59.432453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.196 [2024-12-13 08:26:59.432559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.196 [2024-12-13 08:26:59.432745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:47.196 [2024-12-13 08:26:59.432815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:47.196 [2024-12-13 08:26:59.433072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:47.196 [2024-12-13 08:26:59.438626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:47.196 [2024-12-13 08:26:59.438682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:47.196 [2024-12-13 08:26:59.438935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.196 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.197 "name": "raid_bdev1", 00:15:47.197 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:47.197 "strip_size_kb": 64, 00:15:47.197 "state": "online", 00:15:47.197 "raid_level": "raid5f", 00:15:47.197 "superblock": true, 00:15:47.197 "num_base_bdevs": 3, 00:15:47.197 "num_base_bdevs_discovered": 3, 00:15:47.197 "num_base_bdevs_operational": 3, 00:15:47.197 "base_bdevs_list": [ 00:15:47.197 { 00:15:47.197 "name": "pt1", 00:15:47.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.197 "is_configured": true, 00:15:47.197 "data_offset": 2048, 00:15:47.197 "data_size": 63488 00:15:47.197 }, 00:15:47.197 { 00:15:47.197 "name": "pt2", 00:15:47.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.197 "is_configured": true, 00:15:47.197 "data_offset": 2048, 00:15:47.197 "data_size": 63488 00:15:47.197 }, 00:15:47.197 { 00:15:47.197 "name": "pt3", 00:15:47.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.197 "is_configured": true, 00:15:47.197 "data_offset": 2048, 00:15:47.197 "data_size": 63488 00:15:47.197 } 00:15:47.197 ] 00:15:47.197 }' 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.197 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:47.766 08:26:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 [2024-12-13 08:26:59.872731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.766 "name": "raid_bdev1", 00:15:47.766 "aliases": [ 00:15:47.766 "8dd77875-04fa-4b82-98b1-03b37b0a0fa4" 00:15:47.766 ], 00:15:47.766 "product_name": "Raid Volume", 00:15:47.766 "block_size": 512, 00:15:47.766 "num_blocks": 126976, 00:15:47.766 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:47.766 "assigned_rate_limits": { 00:15:47.766 "rw_ios_per_sec": 0, 00:15:47.766 "rw_mbytes_per_sec": 0, 00:15:47.766 "r_mbytes_per_sec": 0, 00:15:47.766 "w_mbytes_per_sec": 0 00:15:47.766 }, 00:15:47.766 "claimed": false, 00:15:47.766 "zoned": false, 00:15:47.766 "supported_io_types": { 00:15:47.766 "read": true, 00:15:47.766 "write": true, 00:15:47.766 "unmap": false, 00:15:47.766 "flush": false, 00:15:47.766 "reset": true, 00:15:47.766 "nvme_admin": false, 00:15:47.766 "nvme_io": false, 00:15:47.766 "nvme_io_md": false, 
00:15:47.766 "write_zeroes": true, 00:15:47.766 "zcopy": false, 00:15:47.766 "get_zone_info": false, 00:15:47.766 "zone_management": false, 00:15:47.766 "zone_append": false, 00:15:47.766 "compare": false, 00:15:47.766 "compare_and_write": false, 00:15:47.766 "abort": false, 00:15:47.766 "seek_hole": false, 00:15:47.766 "seek_data": false, 00:15:47.766 "copy": false, 00:15:47.766 "nvme_iov_md": false 00:15:47.766 }, 00:15:47.766 "driver_specific": { 00:15:47.766 "raid": { 00:15:47.766 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:47.766 "strip_size_kb": 64, 00:15:47.766 "state": "online", 00:15:47.766 "raid_level": "raid5f", 00:15:47.766 "superblock": true, 00:15:47.766 "num_base_bdevs": 3, 00:15:47.766 "num_base_bdevs_discovered": 3, 00:15:47.766 "num_base_bdevs_operational": 3, 00:15:47.766 "base_bdevs_list": [ 00:15:47.766 { 00:15:47.766 "name": "pt1", 00:15:47.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.766 "is_configured": true, 00:15:47.766 "data_offset": 2048, 00:15:47.766 "data_size": 63488 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "name": "pt2", 00:15:47.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.766 "is_configured": true, 00:15:47.766 "data_offset": 2048, 00:15:47.766 "data_size": 63488 00:15:47.766 }, 00:15:47.766 { 00:15:47.766 "name": "pt3", 00:15:47.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.766 "is_configured": true, 00:15:47.766 "data_offset": 2048, 00:15:47.766 "data_size": 63488 00:15:47.766 } 00:15:47.766 ] 00:15:47.766 } 00:15:47.766 } 00:15:47.766 }' 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:47.766 pt2 00:15:47.766 pt3' 00:15:47.766 08:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.766 
08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.766 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 [2024-12-13 08:27:00.180199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8dd77875-04fa-4b82-98b1-03b37b0a0fa4 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8dd77875-04fa-4b82-98b1-03b37b0a0fa4 ']' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.026 08:27:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 [2024-12-13 08:27:00.223919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.026 [2024-12-13 08:27:00.224008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.026 [2024-12-13 08:27:00.224144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.026 [2024-12-13 08:27:00.224242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.026 [2024-12-13 08:27:00.224289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:48.026 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:48.286 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:48.286 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 [2024-12-13 08:27:00.395652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:48.287 [2024-12-13 08:27:00.397690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:48.287 [2024-12-13 08:27:00.397792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:48.287 [2024-12-13 08:27:00.397863] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:48.287 [2024-12-13 08:27:00.397915] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:48.287 [2024-12-13 08:27:00.397935] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:48.287 [2024-12-13 08:27:00.397952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.287 [2024-12-13 08:27:00.397962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:48.287 request: 00:15:48.287 { 00:15:48.287 "name": "raid_bdev1", 00:15:48.287 "raid_level": "raid5f", 00:15:48.287 "base_bdevs": [ 00:15:48.287 "malloc1", 00:15:48.287 "malloc2", 00:15:48.287 "malloc3" 00:15:48.287 ], 00:15:48.287 "strip_size_kb": 64, 00:15:48.287 "superblock": false, 00:15:48.287 "method": "bdev_raid_create", 00:15:48.287 "req_id": 1 00:15:48.287 } 00:15:48.287 Got JSON-RPC error response 00:15:48.287 response: 00:15:48.287 { 00:15:48.287 "code": -17, 00:15:48.287 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:48.287 } 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 [2024-12-13 08:27:00.467473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.287 [2024-12-13 08:27:00.467569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.287 [2024-12-13 08:27:00.467606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:48.287 [2024-12-13 08:27:00.467633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.287 [2024-12-13 08:27:00.469881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.287 [2024-12-13 08:27:00.469950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.287 [2024-12-13 08:27:00.470070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.287 [2024-12-13 08:27:00.470149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.287 pt1 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.287 "name": "raid_bdev1", 00:15:48.287 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:48.287 "strip_size_kb": 64, 00:15:48.287 "state": "configuring", 00:15:48.287 "raid_level": "raid5f", 00:15:48.287 "superblock": true, 00:15:48.287 "num_base_bdevs": 3, 00:15:48.287 "num_base_bdevs_discovered": 1, 00:15:48.287 
"num_base_bdevs_operational": 3, 00:15:48.287 "base_bdevs_list": [ 00:15:48.287 { 00:15:48.287 "name": "pt1", 00:15:48.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.287 "is_configured": true, 00:15:48.287 "data_offset": 2048, 00:15:48.287 "data_size": 63488 00:15:48.287 }, 00:15:48.287 { 00:15:48.287 "name": null, 00:15:48.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.287 "is_configured": false, 00:15:48.287 "data_offset": 2048, 00:15:48.287 "data_size": 63488 00:15:48.287 }, 00:15:48.287 { 00:15:48.287 "name": null, 00:15:48.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.287 "is_configured": false, 00:15:48.287 "data_offset": 2048, 00:15:48.287 "data_size": 63488 00:15:48.287 } 00:15:48.287 ] 00:15:48.287 }' 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.287 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 [2024-12-13 08:27:00.922734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.864 [2024-12-13 08:27:00.922844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.864 [2024-12-13 08:27:00.922884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:48.864 [2024-12-13 08:27:00.922911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.864 [2024-12-13 08:27:00.923417] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.864 [2024-12-13 08:27:00.923485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.864 [2024-12-13 08:27:00.923606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.864 [2024-12-13 08:27:00.923664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.864 pt2 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 [2024-12-13 08:27:00.934691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.864 08:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.864 "name": "raid_bdev1", 00:15:48.864 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:48.864 "strip_size_kb": 64, 00:15:48.864 "state": "configuring", 00:15:48.865 "raid_level": "raid5f", 00:15:48.865 "superblock": true, 00:15:48.865 "num_base_bdevs": 3, 00:15:48.865 "num_base_bdevs_discovered": 1, 00:15:48.865 "num_base_bdevs_operational": 3, 00:15:48.865 "base_bdevs_list": [ 00:15:48.865 { 00:15:48.865 "name": "pt1", 00:15:48.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.865 "is_configured": true, 00:15:48.865 "data_offset": 2048, 00:15:48.865 "data_size": 63488 00:15:48.865 }, 00:15:48.865 { 00:15:48.865 "name": null, 00:15:48.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.865 "is_configured": false, 00:15:48.865 "data_offset": 0, 00:15:48.865 "data_size": 63488 00:15:48.865 }, 00:15:48.865 { 00:15:48.865 "name": null, 00:15:48.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.865 "is_configured": false, 00:15:48.865 "data_offset": 2048, 00:15:48.865 "data_size": 63488 00:15:48.865 } 00:15:48.865 ] 00:15:48.865 }' 00:15:48.865 08:27:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.865 08:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.126 [2024-12-13 08:27:01.425848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.126 [2024-12-13 08:27:01.425957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.126 [2024-12-13 08:27:01.425992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:49.126 [2024-12-13 08:27:01.426021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.126 [2024-12-13 08:27:01.426525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.126 [2024-12-13 08:27:01.426588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.126 [2024-12-13 08:27:01.426699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.126 [2024-12-13 08:27:01.426753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.126 pt2 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:49.126 08:27:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.126 [2024-12-13 08:27:01.437803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.126 [2024-12-13 08:27:01.437886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.126 [2024-12-13 08:27:01.437915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:49.126 [2024-12-13 08:27:01.437942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.126 [2024-12-13 08:27:01.438359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.126 [2024-12-13 08:27:01.438431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.126 [2024-12-13 08:27:01.438497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:49.126 [2024-12-13 08:27:01.438518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.126 [2024-12-13 08:27:01.438644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:49.126 [2024-12-13 08:27:01.438662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:49.126 [2024-12-13 08:27:01.438882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.126 [2024-12-13 08:27:01.444374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:49.126 pt3 00:15:49.126 [2024-12-13 08:27:01.444432] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:49.126 [2024-12-13 08:27:01.444625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.126 "name": "raid_bdev1", 00:15:49.126 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:49.126 "strip_size_kb": 64, 00:15:49.126 "state": "online", 00:15:49.126 "raid_level": "raid5f", 00:15:49.126 "superblock": true, 00:15:49.126 "num_base_bdevs": 3, 00:15:49.126 "num_base_bdevs_discovered": 3, 00:15:49.126 "num_base_bdevs_operational": 3, 00:15:49.126 "base_bdevs_list": [ 00:15:49.126 { 00:15:49.126 "name": "pt1", 00:15:49.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.126 "is_configured": true, 00:15:49.126 "data_offset": 2048, 00:15:49.126 "data_size": 63488 00:15:49.126 }, 00:15:49.126 { 00:15:49.126 "name": "pt2", 00:15:49.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.126 "is_configured": true, 00:15:49.126 "data_offset": 2048, 00:15:49.126 "data_size": 63488 00:15:49.126 }, 00:15:49.126 { 00:15:49.126 "name": "pt3", 00:15:49.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.126 "is_configured": true, 00:15:49.126 "data_offset": 2048, 00:15:49.126 "data_size": 63488 00:15:49.126 } 00:15:49.126 ] 00:15:49.126 }' 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.126 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.695 [2024-12-13 08:27:01.910564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.695 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.695 "name": "raid_bdev1", 00:15:49.695 "aliases": [ 00:15:49.695 "8dd77875-04fa-4b82-98b1-03b37b0a0fa4" 00:15:49.695 ], 00:15:49.695 "product_name": "Raid Volume", 00:15:49.695 "block_size": 512, 00:15:49.695 "num_blocks": 126976, 00:15:49.695 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:49.695 "assigned_rate_limits": { 00:15:49.695 "rw_ios_per_sec": 0, 00:15:49.695 "rw_mbytes_per_sec": 0, 00:15:49.695 "r_mbytes_per_sec": 0, 00:15:49.695 "w_mbytes_per_sec": 0 00:15:49.695 }, 00:15:49.695 "claimed": false, 00:15:49.695 "zoned": false, 00:15:49.695 "supported_io_types": { 00:15:49.695 "read": true, 00:15:49.695 "write": true, 00:15:49.695 "unmap": false, 00:15:49.695 "flush": false, 00:15:49.695 "reset": true, 00:15:49.695 "nvme_admin": false, 00:15:49.695 "nvme_io": false, 00:15:49.696 "nvme_io_md": false, 00:15:49.696 "write_zeroes": true, 00:15:49.696 "zcopy": false, 00:15:49.696 
"get_zone_info": false, 00:15:49.696 "zone_management": false, 00:15:49.696 "zone_append": false, 00:15:49.696 "compare": false, 00:15:49.696 "compare_and_write": false, 00:15:49.696 "abort": false, 00:15:49.696 "seek_hole": false, 00:15:49.696 "seek_data": false, 00:15:49.696 "copy": false, 00:15:49.696 "nvme_iov_md": false 00:15:49.696 }, 00:15:49.696 "driver_specific": { 00:15:49.696 "raid": { 00:15:49.696 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:49.696 "strip_size_kb": 64, 00:15:49.696 "state": "online", 00:15:49.696 "raid_level": "raid5f", 00:15:49.696 "superblock": true, 00:15:49.696 "num_base_bdevs": 3, 00:15:49.696 "num_base_bdevs_discovered": 3, 00:15:49.696 "num_base_bdevs_operational": 3, 00:15:49.696 "base_bdevs_list": [ 00:15:49.696 { 00:15:49.696 "name": "pt1", 00:15:49.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.696 "is_configured": true, 00:15:49.696 "data_offset": 2048, 00:15:49.696 "data_size": 63488 00:15:49.696 }, 00:15:49.696 { 00:15:49.696 "name": "pt2", 00:15:49.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.696 "is_configured": true, 00:15:49.696 "data_offset": 2048, 00:15:49.696 "data_size": 63488 00:15:49.696 }, 00:15:49.696 { 00:15:49.696 "name": "pt3", 00:15:49.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.696 "is_configured": true, 00:15:49.696 "data_offset": 2048, 00:15:49.696 "data_size": 63488 00:15:49.696 } 00:15:49.696 ] 00:15:49.696 } 00:15:49.696 } 00:15:49.696 }' 00:15:49.696 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.696 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.696 pt2 00:15:49.696 pt3' 00:15:49.696 08:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.696 08:27:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.696 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.696 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.696 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.696 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.696 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 [2024-12-13 08:27:02.178135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8dd77875-04fa-4b82-98b1-03b37b0a0fa4 '!=' 8dd77875-04fa-4b82-98b1-03b37b0a0fa4 ']' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 [2024-12-13 08:27:02.229870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.959 "name": "raid_bdev1", 00:15:49.959 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:49.959 "strip_size_kb": 64, 00:15:49.959 "state": "online", 00:15:49.959 "raid_level": "raid5f", 00:15:49.959 "superblock": true, 00:15:49.959 "num_base_bdevs": 3, 00:15:49.959 "num_base_bdevs_discovered": 2, 00:15:49.959 "num_base_bdevs_operational": 2, 00:15:49.959 "base_bdevs_list": [ 00:15:49.959 { 00:15:49.959 "name": null, 00:15:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.959 "is_configured": false, 00:15:49.959 "data_offset": 0, 00:15:49.959 "data_size": 63488 00:15:49.959 }, 00:15:49.959 { 00:15:49.959 "name": "pt2", 00:15:49.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.959 "is_configured": true, 00:15:49.959 "data_offset": 2048, 00:15:49.959 "data_size": 63488 00:15:49.959 }, 00:15:49.959 { 00:15:49.959 "name": "pt3", 00:15:49.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.959 "is_configured": true, 00:15:49.959 "data_offset": 2048, 00:15:49.959 "data_size": 63488 00:15:49.959 } 00:15:49.959 ] 00:15:49.959 }' 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.959 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 [2024-12-13 08:27:02.689027] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.542 [2024-12-13 08:27:02.689117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.542 [2024-12-13 08:27:02.689225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.542 [2024-12-13 08:27:02.689321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.542 [2024-12-13 08:27:02.689414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 [2024-12-13 08:27:02.776834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.542 [2024-12-13 08:27:02.776937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.542 [2024-12-13 08:27:02.776970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:50.542 [2024-12-13 08:27:02.776999] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:50.542 [2024-12-13 08:27:02.779196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.542 [2024-12-13 08:27:02.779294] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.542 [2024-12-13 08:27:02.779401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.542 [2024-12-13 08:27:02.779471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.542 pt2 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.542 "name": "raid_bdev1", 00:15:50.542 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:50.542 "strip_size_kb": 64, 00:15:50.542 "state": "configuring", 00:15:50.542 "raid_level": "raid5f", 00:15:50.542 "superblock": true, 00:15:50.542 "num_base_bdevs": 3, 00:15:50.542 "num_base_bdevs_discovered": 1, 00:15:50.542 "num_base_bdevs_operational": 2, 00:15:50.542 "base_bdevs_list": [ 00:15:50.542 { 00:15:50.542 "name": null, 00:15:50.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.542 "is_configured": false, 00:15:50.542 "data_offset": 2048, 00:15:50.542 "data_size": 63488 00:15:50.542 }, 00:15:50.542 { 00:15:50.542 "name": "pt2", 00:15:50.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.542 "is_configured": true, 00:15:50.542 "data_offset": 2048, 00:15:50.542 "data_size": 63488 00:15:50.542 }, 00:15:50.542 { 00:15:50.542 "name": null, 00:15:50.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.542 "is_configured": false, 00:15:50.542 "data_offset": 2048, 00:15:50.542 "data_size": 63488 00:15:50.542 } 00:15:50.542 ] 00:15:50.542 }' 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.542 08:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.111 [2024-12-13 08:27:03.248063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.111 [2024-12-13 08:27:03.248216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.111 [2024-12-13 08:27:03.248276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:51.111 [2024-12-13 08:27:03.248315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.111 [2024-12-13 08:27:03.248854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.111 [2024-12-13 08:27:03.248922] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.111 [2024-12-13 08:27:03.249047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.111 [2024-12-13 08:27:03.249117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.111 [2024-12-13 08:27:03.249301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.111 [2024-12-13 08:27:03.249347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.111 [2024-12-13 08:27:03.249640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:51.111 [2024-12-13 08:27:03.255253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.111 [2024-12-13 08:27:03.255316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:51.111 [2024-12-13 08:27:03.255721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.111 pt3 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.111 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.112 08:27:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.112 "name": "raid_bdev1", 00:15:51.112 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:51.112 "strip_size_kb": 64, 00:15:51.112 "state": "online", 00:15:51.112 "raid_level": "raid5f", 00:15:51.112 "superblock": true, 00:15:51.112 "num_base_bdevs": 3, 00:15:51.112 "num_base_bdevs_discovered": 2, 00:15:51.112 "num_base_bdevs_operational": 2, 00:15:51.112 "base_bdevs_list": [ 00:15:51.112 { 00:15:51.112 "name": null, 00:15:51.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.112 "is_configured": false, 00:15:51.112 "data_offset": 2048, 00:15:51.112 "data_size": 63488 00:15:51.112 }, 00:15:51.112 { 00:15:51.112 "name": "pt2", 00:15:51.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.112 "is_configured": true, 00:15:51.112 "data_offset": 2048, 00:15:51.112 "data_size": 63488 00:15:51.112 }, 00:15:51.112 { 00:15:51.112 "name": "pt3", 00:15:51.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.112 "is_configured": true, 00:15:51.112 "data_offset": 2048, 00:15:51.112 "data_size": 63488 00:15:51.112 } 00:15:51.112 ] 00:15:51.112 }' 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.112 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.371 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.371 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.371 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.371 [2024-12-13 08:27:03.726792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.371 [2024-12-13 08:27:03.726886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.371 [2024-12-13 08:27:03.727014] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.371 [2024-12-13 08:27:03.727106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.371 [2024-12-13 08:27:03.727215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:51.371 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.631 [2024-12-13 08:27:03.802680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.631 [2024-12-13 08:27:03.802782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.631 [2024-12-13 08:27:03.802820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.631 [2024-12-13 08:27:03.802848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.631 [2024-12-13 08:27:03.805168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.631 [2024-12-13 08:27:03.805240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.631 [2024-12-13 08:27:03.805343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.631 [2024-12-13 08:27:03.805406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.631 [2024-12-13 08:27:03.805586] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.631 [2024-12-13 08:27:03.805643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.631 [2024-12-13 08:27:03.805680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:51.631 [2024-12-13 08:27:03.805768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.631 pt1 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:51.631 08:27:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.631 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.631 "name": "raid_bdev1", 00:15:51.631 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:51.631 "strip_size_kb": 64, 00:15:51.631 "state": "configuring", 00:15:51.631 "raid_level": "raid5f", 00:15:51.631 
"superblock": true, 00:15:51.631 "num_base_bdevs": 3, 00:15:51.631 "num_base_bdevs_discovered": 1, 00:15:51.631 "num_base_bdevs_operational": 2, 00:15:51.631 "base_bdevs_list": [ 00:15:51.631 { 00:15:51.631 "name": null, 00:15:51.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.631 "is_configured": false, 00:15:51.631 "data_offset": 2048, 00:15:51.631 "data_size": 63488 00:15:51.631 }, 00:15:51.631 { 00:15:51.631 "name": "pt2", 00:15:51.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.632 "is_configured": true, 00:15:51.632 "data_offset": 2048, 00:15:51.632 "data_size": 63488 00:15:51.632 }, 00:15:51.632 { 00:15:51.632 "name": null, 00:15:51.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.632 "is_configured": false, 00:15:51.632 "data_offset": 2048, 00:15:51.632 "data_size": 63488 00:15:51.632 } 00:15:51.632 ] 00:15:51.632 }' 00:15:51.632 08:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.632 08:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.200 [2024-12-13 08:27:04.313867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:52.200 [2024-12-13 08:27:04.314016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.200 [2024-12-13 08:27:04.314063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:52.200 [2024-12-13 08:27:04.314109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.200 [2024-12-13 08:27:04.314701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.200 [2024-12-13 08:27:04.314772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.200 [2024-12-13 08:27:04.314909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:52.200 [2024-12-13 08:27:04.314971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.200 [2024-12-13 08:27:04.315175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:52.200 [2024-12-13 08:27:04.315224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:52.200 [2024-12-13 08:27:04.315554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:52.200 [2024-12-13 08:27:04.321992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:52.200 [2024-12-13 08:27:04.322024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:52.200 [2024-12-13 08:27:04.322354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.200 pt3 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.200 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.201 "name": "raid_bdev1", 00:15:52.201 "uuid": "8dd77875-04fa-4b82-98b1-03b37b0a0fa4", 00:15:52.201 "strip_size_kb": 64, 00:15:52.201 "state": "online", 00:15:52.201 "raid_level": 
"raid5f", 00:15:52.201 "superblock": true, 00:15:52.201 "num_base_bdevs": 3, 00:15:52.201 "num_base_bdevs_discovered": 2, 00:15:52.201 "num_base_bdevs_operational": 2, 00:15:52.201 "base_bdevs_list": [ 00:15:52.201 { 00:15:52.201 "name": null, 00:15:52.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.201 "is_configured": false, 00:15:52.201 "data_offset": 2048, 00:15:52.201 "data_size": 63488 00:15:52.201 }, 00:15:52.201 { 00:15:52.201 "name": "pt2", 00:15:52.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.201 "is_configured": true, 00:15:52.201 "data_offset": 2048, 00:15:52.201 "data_size": 63488 00:15:52.201 }, 00:15:52.201 { 00:15:52.201 "name": "pt3", 00:15:52.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.201 "is_configured": true, 00:15:52.201 "data_offset": 2048, 00:15:52.201 "data_size": 63488 00:15:52.201 } 00:15:52.201 ] 00:15:52.201 }' 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.201 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.460 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.460 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.460 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.460 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.460 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.720 [2024-12-13 08:27:04.845845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8dd77875-04fa-4b82-98b1-03b37b0a0fa4 '!=' 8dd77875-04fa-4b82-98b1-03b37b0a0fa4 ']' 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81312 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81312 ']' 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81312 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81312 00:15:52.720 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.721 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.721 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81312' 00:15:52.721 killing process with pid 81312 00:15:52.721 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81312 00:15:52.721 [2024-12-13 08:27:04.907386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.721 [2024-12-13 08:27:04.907536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:52.721 08:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81312 00:15:52.721 [2024-12-13 08:27:04.907638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.721 [2024-12-13 08:27:04.907653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:52.979 [2024-12-13 08:27:05.208380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.380 08:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:54.380 00:15:54.380 real 0m8.026s 00:15:54.380 user 0m12.595s 00:15:54.380 sys 0m1.469s 00:15:54.380 08:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.380 08:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.380 ************************************ 00:15:54.380 END TEST raid5f_superblock_test 00:15:54.380 ************************************ 00:15:54.380 08:27:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:54.380 08:27:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:54.380 08:27:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:54.380 08:27:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.380 08:27:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.380 ************************************ 00:15:54.380 START TEST raid5f_rebuild_test 00:15:54.380 ************************************ 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.380 08:27:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81750 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81750 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81750 ']' 00:15:54.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.380 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.381 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.381 08:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.381 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:54.381 Zero copy mechanism will not be used. 00:15:54.381 [2024-12-13 08:27:06.504918] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:15:54.381 [2024-12-13 08:27:06.505048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81750 ] 00:15:54.381 [2024-12-13 08:27:06.673651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.640 [2024-12-13 08:27:06.792794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.640 [2024-12-13 08:27:06.997884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.640 [2024-12-13 08:27:06.997922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 BaseBdev1_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 [2024-12-13 08:27:07.403193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.208 [2024-12-13 08:27:07.403317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.208 [2024-12-13 08:27:07.403365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.208 [2024-12-13 08:27:07.403398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.208 [2024-12-13 08:27:07.405628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.208 [2024-12-13 08:27:07.405710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.208 BaseBdev1 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 BaseBdev2_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 [2024-12-13 08:27:07.459709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:55.208 [2024-12-13 08:27:07.459778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.208 [2024-12-13 08:27:07.459799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.208 [2024-12-13 08:27:07.459811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.208 [2024-12-13 08:27:07.462086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.208 [2024-12-13 08:27:07.462138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.208 BaseBdev2 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 BaseBdev3_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 [2024-12-13 08:27:07.536870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.208 [2024-12-13 08:27:07.536987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.208 [2024-12-13 08:27:07.537036] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.208 [2024-12-13 08:27:07.537080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.208 [2024-12-13 08:27:07.539383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.208 [2024-12-13 08:27:07.539461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.208 BaseBdev3 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.208 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.466 spare_malloc 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.466 spare_delay 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.466 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.466 [2024-12-13 08:27:07.606273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.466 [2024-12-13 08:27:07.606386] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.466 [2024-12-13 08:27:07.606427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:55.466 [2024-12-13 08:27:07.606458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.466 [2024-12-13 08:27:07.608690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.466 [2024-12-13 08:27:07.608772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.466 spare 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 [2024-12-13 08:27:07.618321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.467 [2024-12-13 08:27:07.620152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.467 [2024-12-13 08:27:07.620261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.467 [2024-12-13 08:27:07.620369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.467 [2024-12-13 08:27:07.620399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:55.467 [2024-12-13 08:27:07.620682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.467 [2024-12-13 08:27:07.626360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.467 [2024-12-13 08:27:07.626419] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.467 [2024-12-13 08:27:07.626710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.467 08:27:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.467 "name": "raid_bdev1", 00:15:55.467 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:15:55.467 "strip_size_kb": 64, 00:15:55.467 "state": "online", 00:15:55.467 "raid_level": "raid5f", 00:15:55.467 "superblock": false, 00:15:55.467 "num_base_bdevs": 3, 00:15:55.467 "num_base_bdevs_discovered": 3, 00:15:55.467 "num_base_bdevs_operational": 3, 00:15:55.467 "base_bdevs_list": [ 00:15:55.467 { 00:15:55.467 "name": "BaseBdev1", 00:15:55.467 "uuid": "b7e4efa5-c0ed-516d-a1df-66ec9e966c3d", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 }, 00:15:55.467 { 00:15:55.467 "name": "BaseBdev2", 00:15:55.467 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 }, 00:15:55.467 { 00:15:55.467 "name": "BaseBdev3", 00:15:55.467 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:15:55.467 "is_configured": true, 00:15:55.467 "data_offset": 0, 00:15:55.467 "data_size": 65536 00:15:55.467 } 00:15:55.467 ] 00:15:55.467 }' 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.467 08:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.725 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.725 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.725 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.725 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.725 [2024-12-13 08:27:08.057356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.725 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:55.984 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.242 [2024-12-13 08:27:08.348674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:56.242 /dev/nbd0 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.242 1+0 records in 00:15:56.242 1+0 records out 00:15:56.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350892 s, 11.7 MB/s 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:56.242 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:56.500 512+0 records in 00:15:56.500 512+0 records out 00:15:56.500 67108864 bytes (67 MB, 64 MiB) copied, 0.365736 s, 183 MB/s 00:15:56.500 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.500 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.500 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.500 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.500 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:56.501 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.501 08:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.759 [2024-12-13 08:27:08.989812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.759 [2024-12-13 08:27:09.030164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.759 "name": "raid_bdev1", 00:15:56.759 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:15:56.759 "strip_size_kb": 64, 00:15:56.759 "state": "online", 00:15:56.759 "raid_level": "raid5f", 00:15:56.759 "superblock": false, 00:15:56.759 "num_base_bdevs": 3, 00:15:56.759 "num_base_bdevs_discovered": 2, 00:15:56.759 "num_base_bdevs_operational": 2, 00:15:56.759 "base_bdevs_list": [ 00:15:56.759 { 00:15:56.759 "name": null, 00:15:56.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.759 "is_configured": false, 00:15:56.759 "data_offset": 0, 00:15:56.759 "data_size": 65536 00:15:56.759 }, 00:15:56.759 { 00:15:56.759 "name": "BaseBdev2", 00:15:56.759 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:15:56.759 "is_configured": true, 00:15:56.759 "data_offset": 0, 00:15:56.759 "data_size": 65536 00:15:56.759 }, 00:15:56.759 { 00:15:56.759 "name": "BaseBdev3", 00:15:56.759 "uuid": 
"84b8f643-0a7b-54bc-bf0b-24964810369f", 00:15:56.759 "is_configured": true, 00:15:56.759 "data_offset": 0, 00:15:56.759 "data_size": 65536 00:15:56.759 } 00:15:56.759 ] 00:15:56.759 }' 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.759 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.327 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.327 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.327 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.327 [2024-12-13 08:27:09.485382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.327 [2024-12-13 08:27:09.503775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:57.327 08:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.327 08:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.327 [2024-12-13 08:27:09.512264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.265 08:27:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.265 "name": "raid_bdev1", 00:15:58.265 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:15:58.265 "strip_size_kb": 64, 00:15:58.265 "state": "online", 00:15:58.265 "raid_level": "raid5f", 00:15:58.265 "superblock": false, 00:15:58.265 "num_base_bdevs": 3, 00:15:58.265 "num_base_bdevs_discovered": 3, 00:15:58.265 "num_base_bdevs_operational": 3, 00:15:58.265 "process": { 00:15:58.265 "type": "rebuild", 00:15:58.265 "target": "spare", 00:15:58.265 "progress": { 00:15:58.265 "blocks": 20480, 00:15:58.265 "percent": 15 00:15:58.265 } 00:15:58.265 }, 00:15:58.265 "base_bdevs_list": [ 00:15:58.265 { 00:15:58.265 "name": "spare", 00:15:58.265 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:15:58.265 "is_configured": true, 00:15:58.265 "data_offset": 0, 00:15:58.265 "data_size": 65536 00:15:58.265 }, 00:15:58.265 { 00:15:58.265 "name": "BaseBdev2", 00:15:58.265 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:15:58.265 "is_configured": true, 00:15:58.265 "data_offset": 0, 00:15:58.265 "data_size": 65536 00:15:58.265 }, 00:15:58.265 { 00:15:58.265 "name": "BaseBdev3", 00:15:58.265 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:15:58.265 "is_configured": true, 00:15:58.265 "data_offset": 0, 00:15:58.265 "data_size": 65536 00:15:58.265 } 00:15:58.265 ] 00:15:58.265 }' 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.265 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.266 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.266 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.266 [2024-12-13 08:27:10.623376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.525 [2024-12-13 08:27:10.723765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.525 [2024-12-13 08:27:10.723949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.525 [2024-12-13 08:27:10.723991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.525 [2024-12-13 08:27:10.724015] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.525 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.525 "name": "raid_bdev1", 00:15:58.525 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:15:58.525 "strip_size_kb": 64, 00:15:58.525 "state": "online", 00:15:58.525 "raid_level": "raid5f", 00:15:58.525 "superblock": false, 00:15:58.525 "num_base_bdevs": 3, 00:15:58.526 "num_base_bdevs_discovered": 2, 00:15:58.526 "num_base_bdevs_operational": 2, 00:15:58.526 "base_bdevs_list": [ 00:15:58.526 { 00:15:58.526 "name": null, 00:15:58.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.526 "is_configured": false, 00:15:58.526 "data_offset": 0, 00:15:58.526 "data_size": 65536 00:15:58.526 }, 00:15:58.526 { 00:15:58.526 "name": "BaseBdev2", 00:15:58.526 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:15:58.526 "is_configured": true, 00:15:58.526 "data_offset": 0, 00:15:58.526 "data_size": 65536 00:15:58.526 }, 00:15:58.526 { 00:15:58.526 "name": "BaseBdev3", 00:15:58.526 "uuid": 
"84b8f643-0a7b-54bc-bf0b-24964810369f", 00:15:58.526 "is_configured": true, 00:15:58.526 "data_offset": 0, 00:15:58.526 "data_size": 65536 00:15:58.526 } 00:15:58.526 ] 00:15:58.526 }' 00:15:58.526 08:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.526 08:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.102 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.102 "name": "raid_bdev1", 00:15:59.103 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:15:59.103 "strip_size_kb": 64, 00:15:59.103 "state": "online", 00:15:59.103 "raid_level": "raid5f", 00:15:59.103 "superblock": false, 00:15:59.103 "num_base_bdevs": 3, 00:15:59.103 "num_base_bdevs_discovered": 2, 00:15:59.103 "num_base_bdevs_operational": 2, 00:15:59.103 "base_bdevs_list": [ 00:15:59.103 { 00:15:59.103 
"name": null, 00:15:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.103 "is_configured": false, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev2", 00:15:59.103 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev3", 00:15:59.103 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 } 00:15:59.103 ] 00:15:59.103 }' 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 [2024-12-13 08:27:11.323672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.103 [2024-12-13 08:27:11.339182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.103 08:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.103 [2024-12-13 08:27:11.347154] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.038 "name": "raid_bdev1", 00:16:00.038 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:00.038 "strip_size_kb": 64, 00:16:00.038 "state": "online", 00:16:00.038 "raid_level": "raid5f", 00:16:00.038 "superblock": false, 00:16:00.038 "num_base_bdevs": 3, 00:16:00.038 "num_base_bdevs_discovered": 3, 00:16:00.038 "num_base_bdevs_operational": 3, 00:16:00.038 "process": { 00:16:00.038 "type": "rebuild", 00:16:00.038 "target": "spare", 00:16:00.038 "progress": { 00:16:00.038 "blocks": 20480, 00:16:00.038 "percent": 15 00:16:00.038 } 00:16:00.038 }, 00:16:00.038 "base_bdevs_list": [ 00:16:00.038 { 00:16:00.038 "name": "spare", 00:16:00.038 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:00.038 "is_configured": true, 00:16:00.038 "data_offset": 0, 
00:16:00.038 "data_size": 65536 00:16:00.038 }, 00:16:00.038 { 00:16:00.038 "name": "BaseBdev2", 00:16:00.038 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:00.038 "is_configured": true, 00:16:00.038 "data_offset": 0, 00:16:00.038 "data_size": 65536 00:16:00.038 }, 00:16:00.038 { 00:16:00.038 "name": "BaseBdev3", 00:16:00.038 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:00.038 "is_configured": true, 00:16:00.038 "data_offset": 0, 00:16:00.038 "data_size": 65536 00:16:00.038 } 00:16:00.038 ] 00:16:00.038 }' 00:16:00.038 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.297 08:27:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.297 "name": "raid_bdev1", 00:16:00.297 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:00.297 "strip_size_kb": 64, 00:16:00.297 "state": "online", 00:16:00.297 "raid_level": "raid5f", 00:16:00.297 "superblock": false, 00:16:00.297 "num_base_bdevs": 3, 00:16:00.297 "num_base_bdevs_discovered": 3, 00:16:00.297 "num_base_bdevs_operational": 3, 00:16:00.297 "process": { 00:16:00.297 "type": "rebuild", 00:16:00.297 "target": "spare", 00:16:00.297 "progress": { 00:16:00.297 "blocks": 22528, 00:16:00.297 "percent": 17 00:16:00.297 } 00:16:00.297 }, 00:16:00.297 "base_bdevs_list": [ 00:16:00.297 { 00:16:00.297 "name": "spare", 00:16:00.297 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:00.297 "is_configured": true, 00:16:00.297 "data_offset": 0, 00:16:00.297 "data_size": 65536 00:16:00.297 }, 00:16:00.297 { 00:16:00.297 "name": "BaseBdev2", 00:16:00.297 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:00.297 "is_configured": true, 00:16:00.297 "data_offset": 0, 00:16:00.297 "data_size": 65536 00:16:00.297 }, 00:16:00.297 { 00:16:00.297 "name": "BaseBdev3", 00:16:00.297 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:00.297 "is_configured": true, 00:16:00.297 "data_offset": 0, 00:16:00.297 "data_size": 65536 00:16:00.297 } 
00:16:00.297 ] 00:16:00.297 }' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.297 08:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.673 "name": "raid_bdev1", 00:16:01.673 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:01.673 
"strip_size_kb": 64, 00:16:01.673 "state": "online", 00:16:01.673 "raid_level": "raid5f", 00:16:01.673 "superblock": false, 00:16:01.673 "num_base_bdevs": 3, 00:16:01.673 "num_base_bdevs_discovered": 3, 00:16:01.673 "num_base_bdevs_operational": 3, 00:16:01.673 "process": { 00:16:01.673 "type": "rebuild", 00:16:01.673 "target": "spare", 00:16:01.673 "progress": { 00:16:01.673 "blocks": 47104, 00:16:01.673 "percent": 35 00:16:01.673 } 00:16:01.673 }, 00:16:01.673 "base_bdevs_list": [ 00:16:01.673 { 00:16:01.673 "name": "spare", 00:16:01.673 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:01.673 "is_configured": true, 00:16:01.673 "data_offset": 0, 00:16:01.673 "data_size": 65536 00:16:01.673 }, 00:16:01.673 { 00:16:01.673 "name": "BaseBdev2", 00:16:01.673 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:01.673 "is_configured": true, 00:16:01.673 "data_offset": 0, 00:16:01.673 "data_size": 65536 00:16:01.673 }, 00:16:01.673 { 00:16:01.673 "name": "BaseBdev3", 00:16:01.673 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:01.673 "is_configured": true, 00:16:01.673 "data_offset": 0, 00:16:01.673 "data_size": 65536 00:16:01.673 } 00:16:01.673 ] 00:16:01.673 }' 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.673 08:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.610 08:27:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.610 "name": "raid_bdev1", 00:16:02.610 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:02.610 "strip_size_kb": 64, 00:16:02.610 "state": "online", 00:16:02.610 "raid_level": "raid5f", 00:16:02.610 "superblock": false, 00:16:02.610 "num_base_bdevs": 3, 00:16:02.610 "num_base_bdevs_discovered": 3, 00:16:02.610 "num_base_bdevs_operational": 3, 00:16:02.610 "process": { 00:16:02.610 "type": "rebuild", 00:16:02.610 "target": "spare", 00:16:02.610 "progress": { 00:16:02.610 "blocks": 69632, 00:16:02.610 "percent": 53 00:16:02.610 } 00:16:02.610 }, 00:16:02.610 "base_bdevs_list": [ 00:16:02.610 { 00:16:02.610 "name": "spare", 00:16:02.610 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:02.610 "is_configured": true, 00:16:02.610 "data_offset": 0, 00:16:02.610 "data_size": 65536 00:16:02.610 }, 00:16:02.610 { 00:16:02.610 "name": "BaseBdev2", 00:16:02.610 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:02.610 
"is_configured": true, 00:16:02.610 "data_offset": 0, 00:16:02.610 "data_size": 65536 00:16:02.610 }, 00:16:02.610 { 00:16:02.610 "name": "BaseBdev3", 00:16:02.610 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:02.610 "is_configured": true, 00:16:02.610 "data_offset": 0, 00:16:02.610 "data_size": 65536 00:16:02.610 } 00:16:02.610 ] 00:16:02.610 }' 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.610 08:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.554 08:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:03.814 08:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.814 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.814 "name": "raid_bdev1", 00:16:03.814 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:03.814 "strip_size_kb": 64, 00:16:03.814 "state": "online", 00:16:03.814 "raid_level": "raid5f", 00:16:03.814 "superblock": false, 00:16:03.814 "num_base_bdevs": 3, 00:16:03.814 "num_base_bdevs_discovered": 3, 00:16:03.814 "num_base_bdevs_operational": 3, 00:16:03.814 "process": { 00:16:03.814 "type": "rebuild", 00:16:03.814 "target": "spare", 00:16:03.814 "progress": { 00:16:03.814 "blocks": 92160, 00:16:03.814 "percent": 70 00:16:03.814 } 00:16:03.814 }, 00:16:03.814 "base_bdevs_list": [ 00:16:03.814 { 00:16:03.814 "name": "spare", 00:16:03.814 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:03.814 "is_configured": true, 00:16:03.814 "data_offset": 0, 00:16:03.814 "data_size": 65536 00:16:03.814 }, 00:16:03.814 { 00:16:03.814 "name": "BaseBdev2", 00:16:03.814 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:03.814 "is_configured": true, 00:16:03.814 "data_offset": 0, 00:16:03.814 "data_size": 65536 00:16:03.814 }, 00:16:03.814 { 00:16:03.814 "name": "BaseBdev3", 00:16:03.814 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:03.814 "is_configured": true, 00:16:03.814 "data_offset": 0, 00:16:03.814 "data_size": 65536 00:16:03.814 } 00:16:03.814 ] 00:16:03.814 }' 00:16:03.814 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.814 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.814 08:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.814 08:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.814 08:27:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.753 "name": "raid_bdev1", 00:16:04.753 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:04.753 "strip_size_kb": 64, 00:16:04.753 "state": "online", 00:16:04.753 "raid_level": "raid5f", 00:16:04.753 "superblock": false, 00:16:04.753 "num_base_bdevs": 3, 00:16:04.753 "num_base_bdevs_discovered": 3, 00:16:04.753 "num_base_bdevs_operational": 3, 00:16:04.753 "process": { 00:16:04.753 "type": "rebuild", 00:16:04.753 "target": "spare", 00:16:04.753 "progress": { 00:16:04.753 "blocks": 114688, 00:16:04.753 "percent": 87 00:16:04.753 } 00:16:04.753 }, 00:16:04.753 "base_bdevs_list": [ 00:16:04.753 { 
00:16:04.753 "name": "spare", 00:16:04.753 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:04.753 "is_configured": true, 00:16:04.753 "data_offset": 0, 00:16:04.753 "data_size": 65536 00:16:04.753 }, 00:16:04.753 { 00:16:04.753 "name": "BaseBdev2", 00:16:04.753 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:04.753 "is_configured": true, 00:16:04.753 "data_offset": 0, 00:16:04.753 "data_size": 65536 00:16:04.753 }, 00:16:04.753 { 00:16:04.753 "name": "BaseBdev3", 00:16:04.753 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:04.753 "is_configured": true, 00:16:04.753 "data_offset": 0, 00:16:04.753 "data_size": 65536 00:16:04.753 } 00:16:04.753 ] 00:16:04.753 }' 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.753 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.013 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.013 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.013 08:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.581 [2024-12-13 08:27:17.793402] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.581 [2024-12-13 08:27:17.793545] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.581 [2024-12-13 08:27:17.793612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.841 08:27:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.841 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.100 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.100 "name": "raid_bdev1", 00:16:06.100 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:06.100 "strip_size_kb": 64, 00:16:06.100 "state": "online", 00:16:06.100 "raid_level": "raid5f", 00:16:06.100 "superblock": false, 00:16:06.100 "num_base_bdevs": 3, 00:16:06.100 "num_base_bdevs_discovered": 3, 00:16:06.100 "num_base_bdevs_operational": 3, 00:16:06.100 "base_bdevs_list": [ 00:16:06.100 { 00:16:06.100 "name": "spare", 00:16:06.101 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 }, 00:16:06.101 { 00:16:06.101 "name": "BaseBdev2", 00:16:06.101 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 }, 00:16:06.101 { 00:16:06.101 "name": "BaseBdev3", 00:16:06.101 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 } 
00:16:06.101 ] 00:16:06.101 }' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.101 "name": "raid_bdev1", 00:16:06.101 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:06.101 "strip_size_kb": 64, 00:16:06.101 "state": "online", 00:16:06.101 "raid_level": "raid5f", 00:16:06.101 "superblock": false, 
00:16:06.101 "num_base_bdevs": 3, 00:16:06.101 "num_base_bdevs_discovered": 3, 00:16:06.101 "num_base_bdevs_operational": 3, 00:16:06.101 "base_bdevs_list": [ 00:16:06.101 { 00:16:06.101 "name": "spare", 00:16:06.101 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 }, 00:16:06.101 { 00:16:06.101 "name": "BaseBdev2", 00:16:06.101 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 }, 00:16:06.101 { 00:16:06.101 "name": "BaseBdev3", 00:16:06.101 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 00:16:06.101 "is_configured": true, 00:16:06.101 "data_offset": 0, 00:16:06.101 "data_size": 65536 00:16:06.101 } 00:16:06.101 ] 00:16:06.101 }' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.101 
08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.101 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.360 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.360 "name": "raid_bdev1", 00:16:06.360 "uuid": "41eb844a-1b8e-4762-804d-98abf9147230", 00:16:06.360 "strip_size_kb": 64, 00:16:06.360 "state": "online", 00:16:06.360 "raid_level": "raid5f", 00:16:06.360 "superblock": false, 00:16:06.360 "num_base_bdevs": 3, 00:16:06.360 "num_base_bdevs_discovered": 3, 00:16:06.360 "num_base_bdevs_operational": 3, 00:16:06.360 "base_bdevs_list": [ 00:16:06.360 { 00:16:06.360 "name": "spare", 00:16:06.360 "uuid": "e308cafd-886b-5634-a1a8-5654b5c3c915", 00:16:06.360 "is_configured": true, 00:16:06.360 "data_offset": 0, 00:16:06.360 "data_size": 65536 00:16:06.360 }, 00:16:06.360 { 00:16:06.360 "name": "BaseBdev2", 00:16:06.360 "uuid": "7beb13d1-25e8-5cfa-8092-cc46401a9542", 00:16:06.360 "is_configured": true, 00:16:06.360 "data_offset": 0, 00:16:06.360 "data_size": 65536 00:16:06.360 }, 00:16:06.360 { 00:16:06.360 "name": "BaseBdev3", 00:16:06.360 "uuid": "84b8f643-0a7b-54bc-bf0b-24964810369f", 
00:16:06.360 "is_configured": true, 00:16:06.360 "data_offset": 0, 00:16:06.360 "data_size": 65536 00:16:06.360 } 00:16:06.360 ] 00:16:06.360 }' 00:16:06.360 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.360 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.620 [2024-12-13 08:27:18.903431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.620 [2024-12-13 08:27:18.903504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.620 [2024-12-13 08:27:18.903612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.620 [2024-12-13 08:27:18.903730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.620 [2024-12-13 08:27:18.903797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.620 08:27:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.880 /dev/nbd0 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.880 1+0 records in 00:16:06.880 1+0 records out 00:16:06.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045473 s, 9.0 MB/s 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.880 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:07.140 /dev/nbd1 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.140 08:27:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.140 1+0 records in 00:16:07.140 1+0 records out 00:16:07.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532798 s, 7.7 MB/s 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.140 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.399 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.659 08:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81750 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81750 ']' 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81750 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81750 00:16:07.918 killing process with pid 81750 00:16:07.918 Received shutdown signal, test time was about 60.000000 seconds 00:16:07.918 00:16:07.918 Latency(us) 00:16:07.918 [2024-12-13T08:27:20.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.918 [2024-12-13T08:27:20.283Z] =================================================================================================================== 00:16:07.918 [2024-12-13T08:27:20.283Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81750' 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81750 00:16:07.918 [2024-12-13 08:27:20.131129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.918 08:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81750 00:16:08.177 [2024-12-13 08:27:20.528676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.556 08:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:09.556 00:16:09.556 real 0m15.230s 00:16:09.556 user 0m18.610s 00:16:09.556 sys 0m2.026s 00:16:09.556 08:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.556 ************************************ 00:16:09.557 END TEST raid5f_rebuild_test 00:16:09.557 ************************************ 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.557 08:27:21 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:09.557 08:27:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:09.557 08:27:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.557 08:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.557 ************************************ 00:16:09.557 START TEST raid5f_rebuild_test_sb 00:16:09.557 ************************************ 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82201 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82201 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82201 ']' 00:16:09.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.557 08:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.557 Zero copy mechanism will not be used. 00:16:09.557 [2024-12-13 08:27:21.811671] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:16:09.557 [2024-12-13 08:27:21.811784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82201 ] 00:16:09.816 [2024-12-13 08:27:21.987327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.816 [2024-12-13 08:27:22.102829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.076 [2024-12-13 08:27:22.296529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.076 [2024-12-13 08:27:22.296584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.335 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.335 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:10.335 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.335 08:27:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.335 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.335 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.595 BaseBdev1_malloc 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.595 [2024-12-13 08:27:22.705318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.595 [2024-12-13 08:27:22.705379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.595 [2024-12-13 08:27:22.705402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.595 [2024-12-13 08:27:22.705413] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.595 [2024-12-13 08:27:22.707479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.595 [2024-12-13 08:27:22.707522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.595 BaseBdev1 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.595 BaseBdev2_malloc 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.595 [2024-12-13 08:27:22.761729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.595 [2024-12-13 08:27:22.761859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.595 [2024-12-13 08:27:22.761886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.595 [2024-12-13 08:27:22.761901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.595 [2024-12-13 08:27:22.764004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.595 [2024-12-13 08:27:22.764044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.595 BaseBdev2 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.595 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 BaseBdev3_malloc 
00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 [2024-12-13 08:27:22.832903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:10.596 [2024-12-13 08:27:22.833003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.596 [2024-12-13 08:27:22.833045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:10.596 [2024-12-13 08:27:22.833057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.596 [2024-12-13 08:27:22.835136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.596 [2024-12-13 08:27:22.835175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:10.596 BaseBdev3 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 spare_malloc 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.596 08:27:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 spare_delay 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 [2024-12-13 08:27:22.901654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.596 [2024-12-13 08:27:22.901708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.596 [2024-12-13 08:27:22.901728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:10.596 [2024-12-13 08:27:22.901738] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.596 [2024-12-13 08:27:22.903820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.596 [2024-12-13 08:27:22.903864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.596 spare 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 [2024-12-13 08:27:22.913702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:16:10.596 [2024-12-13 08:27:22.915467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.596 [2024-12-13 08:27:22.915536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.596 [2024-12-13 08:27:22.915715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.596 [2024-12-13 08:27:22.915733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:10.596 [2024-12-13 08:27:22.915973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:10.596 [2024-12-13 08:27:22.921726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.596 [2024-12-13 08:27:22.921789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.596 [2024-12-13 08:27:22.922032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.596 08:27:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.596 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.856 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.856 "name": "raid_bdev1", 00:16:10.856 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:10.856 "strip_size_kb": 64, 00:16:10.856 "state": "online", 00:16:10.856 "raid_level": "raid5f", 00:16:10.856 "superblock": true, 00:16:10.856 "num_base_bdevs": 3, 00:16:10.856 "num_base_bdevs_discovered": 3, 00:16:10.856 "num_base_bdevs_operational": 3, 00:16:10.856 "base_bdevs_list": [ 00:16:10.856 { 00:16:10.856 "name": "BaseBdev1", 00:16:10.856 "uuid": "3b6f907d-b3d1-5b40-a7e2-5fc8907662fd", 00:16:10.856 "is_configured": true, 00:16:10.856 "data_offset": 2048, 00:16:10.856 "data_size": 63488 00:16:10.856 }, 00:16:10.856 { 00:16:10.856 "name": "BaseBdev2", 00:16:10.856 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:10.856 "is_configured": true, 00:16:10.856 "data_offset": 2048, 00:16:10.856 "data_size": 63488 00:16:10.856 }, 00:16:10.856 { 00:16:10.856 "name": "BaseBdev3", 00:16:10.856 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:10.856 "is_configured": true, 00:16:10.856 "data_offset": 2048, 00:16:10.856 
"data_size": 63488 00:16:10.856 } 00:16:10.856 ] 00:16:10.856 }' 00:16:10.856 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.856 08:27:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 [2024-12-13 08:27:23.407974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.115 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local 
write_unit_size 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:11.116 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:11.375 [2024-12-13 08:27:23.651443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.375 /dev/nbd0 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.375 08:27:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.375 1+0 records in 00:16:11.375 1+0 records out 00:16:11.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474349 s, 8.6 MB/s 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:11.375 08:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 
00:16:11.943 496+0 records in 00:16:11.943 496+0 records out 00:16:11.943 65011712 bytes (65 MB, 62 MiB) copied, 0.358556 s, 181 MB/s 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.943 [2024-12-13 08:27:24.282850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.943 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.202 [2024-12-13 08:27:24.319024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.202 "name": "raid_bdev1", 00:16:12.202 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:12.202 "strip_size_kb": 64, 00:16:12.202 "state": "online", 00:16:12.202 "raid_level": "raid5f", 00:16:12.202 "superblock": true, 00:16:12.202 "num_base_bdevs": 3, 00:16:12.202 "num_base_bdevs_discovered": 2, 00:16:12.202 "num_base_bdevs_operational": 2, 00:16:12.202 "base_bdevs_list": [ 00:16:12.202 { 00:16:12.202 "name": null, 00:16:12.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.202 "is_configured": false, 00:16:12.202 "data_offset": 0, 00:16:12.202 "data_size": 63488 00:16:12.202 }, 00:16:12.202 { 00:16:12.202 "name": "BaseBdev2", 00:16:12.202 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:12.202 "is_configured": true, 00:16:12.202 "data_offset": 2048, 00:16:12.202 "data_size": 63488 00:16:12.202 }, 00:16:12.202 { 00:16:12.202 "name": "BaseBdev3", 00:16:12.202 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:12.202 "is_configured": true, 00:16:12.202 "data_offset": 2048, 00:16:12.202 "data_size": 63488 00:16:12.202 } 00:16:12.202 ] 00:16:12.202 }' 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.202 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.461 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.461 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.461 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.461 [2024-12-13 08:27:24.738303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:16:12.461 [2024-12-13 08:27:24.755144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:12.461 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.461 08:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.461 [2024-12-13 08:27:24.762736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.838 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.838 "name": "raid_bdev1", 00:16:13.838 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:13.838 "strip_size_kb": 64, 00:16:13.838 "state": "online", 00:16:13.838 "raid_level": "raid5f", 00:16:13.838 "superblock": true, 00:16:13.838 "num_base_bdevs": 3, 00:16:13.838 
"num_base_bdevs_discovered": 3, 00:16:13.838 "num_base_bdevs_operational": 3, 00:16:13.838 "process": { 00:16:13.838 "type": "rebuild", 00:16:13.838 "target": "spare", 00:16:13.838 "progress": { 00:16:13.838 "blocks": 20480, 00:16:13.838 "percent": 16 00:16:13.838 } 00:16:13.838 }, 00:16:13.838 "base_bdevs_list": [ 00:16:13.838 { 00:16:13.838 "name": "spare", 00:16:13.838 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:13.838 "is_configured": true, 00:16:13.839 "data_offset": 2048, 00:16:13.839 "data_size": 63488 00:16:13.839 }, 00:16:13.839 { 00:16:13.839 "name": "BaseBdev2", 00:16:13.839 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:13.839 "is_configured": true, 00:16:13.839 "data_offset": 2048, 00:16:13.839 "data_size": 63488 00:16:13.839 }, 00:16:13.839 { 00:16:13.839 "name": "BaseBdev3", 00:16:13.839 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:13.839 "is_configured": true, 00:16:13.839 "data_offset": 2048, 00:16:13.839 "data_size": 63488 00:16:13.839 } 00:16:13.839 ] 00:16:13.839 }' 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.839 08:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.839 [2024-12-13 08:27:25.913791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.839 [2024-12-13 08:27:25.972070] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.839 [2024-12-13 08:27:25.972165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.839 [2024-12-13 08:27:25.972188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.839 [2024-12-13 08:27:25.972197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.839 08:27:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.839 "name": "raid_bdev1", 00:16:13.839 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:13.839 "strip_size_kb": 64, 00:16:13.839 "state": "online", 00:16:13.839 "raid_level": "raid5f", 00:16:13.839 "superblock": true, 00:16:13.839 "num_base_bdevs": 3, 00:16:13.839 "num_base_bdevs_discovered": 2, 00:16:13.839 "num_base_bdevs_operational": 2, 00:16:13.839 "base_bdevs_list": [ 00:16:13.839 { 00:16:13.839 "name": null, 00:16:13.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.839 "is_configured": false, 00:16:13.839 "data_offset": 0, 00:16:13.839 "data_size": 63488 00:16:13.839 }, 00:16:13.839 { 00:16:13.839 "name": "BaseBdev2", 00:16:13.839 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:13.839 "is_configured": true, 00:16:13.839 "data_offset": 2048, 00:16:13.839 "data_size": 63488 00:16:13.839 }, 00:16:13.839 { 00:16:13.839 "name": "BaseBdev3", 00:16:13.839 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:13.839 "is_configured": true, 00:16:13.839 "data_offset": 2048, 00:16:13.839 "data_size": 63488 00:16:13.839 } 00:16:13.839 ] 00:16:13.839 }' 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.839 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.097 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.354 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.354 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.354 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.354 "name": "raid_bdev1", 00:16:14.354 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:14.354 "strip_size_kb": 64, 00:16:14.354 "state": "online", 00:16:14.354 "raid_level": "raid5f", 00:16:14.354 "superblock": true, 00:16:14.354 "num_base_bdevs": 3, 00:16:14.354 "num_base_bdevs_discovered": 2, 00:16:14.354 "num_base_bdevs_operational": 2, 00:16:14.354 "base_bdevs_list": [ 00:16:14.354 { 00:16:14.354 "name": null, 00:16:14.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.354 "is_configured": false, 00:16:14.354 "data_offset": 0, 00:16:14.354 "data_size": 63488 00:16:14.354 }, 00:16:14.354 { 00:16:14.354 "name": "BaseBdev2", 00:16:14.354 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:14.354 "is_configured": true, 00:16:14.354 "data_offset": 2048, 00:16:14.354 "data_size": 63488 00:16:14.354 }, 00:16:14.354 { 00:16:14.354 "name": "BaseBdev3", 00:16:14.354 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:14.354 "is_configured": true, 00:16:14.354 "data_offset": 2048, 00:16:14.354 "data_size": 63488 00:16:14.354 } 00:16:14.355 
] 00:16:14.355 }' 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.355 [2024-12-13 08:27:26.597230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.355 [2024-12-13 08:27:26.613435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.355 08:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.355 [2024-12-13 08:27:26.621032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.292 08:27:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.292 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.552 "name": "raid_bdev1", 00:16:15.552 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:15.552 "strip_size_kb": 64, 00:16:15.552 "state": "online", 00:16:15.552 "raid_level": "raid5f", 00:16:15.552 "superblock": true, 00:16:15.552 "num_base_bdevs": 3, 00:16:15.552 "num_base_bdevs_discovered": 3, 00:16:15.552 "num_base_bdevs_operational": 3, 00:16:15.552 "process": { 00:16:15.552 "type": "rebuild", 00:16:15.552 "target": "spare", 00:16:15.552 "progress": { 00:16:15.552 "blocks": 20480, 00:16:15.552 "percent": 16 00:16:15.552 } 00:16:15.552 }, 00:16:15.552 "base_bdevs_list": [ 00:16:15.552 { 00:16:15.552 "name": "spare", 00:16:15.552 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 }, 00:16:15.552 { 00:16:15.552 "name": "BaseBdev2", 00:16:15.552 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 }, 00:16:15.552 { 00:16:15.552 "name": "BaseBdev3", 00:16:15.552 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 } 00:16:15.552 ] 00:16:15.552 }' 00:16:15.552 08:27:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:15.552 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=568 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.552 "name": "raid_bdev1", 00:16:15.552 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:15.552 "strip_size_kb": 64, 00:16:15.552 "state": "online", 00:16:15.552 "raid_level": "raid5f", 00:16:15.552 "superblock": true, 00:16:15.552 "num_base_bdevs": 3, 00:16:15.552 "num_base_bdevs_discovered": 3, 00:16:15.552 "num_base_bdevs_operational": 3, 00:16:15.552 "process": { 00:16:15.552 "type": "rebuild", 00:16:15.552 "target": "spare", 00:16:15.552 "progress": { 00:16:15.552 "blocks": 22528, 00:16:15.552 "percent": 17 00:16:15.552 } 00:16:15.552 }, 00:16:15.552 "base_bdevs_list": [ 00:16:15.552 { 00:16:15.552 "name": "spare", 00:16:15.552 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 }, 00:16:15.552 { 00:16:15.552 "name": "BaseBdev2", 00:16:15.552 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 }, 00:16:15.552 { 00:16:15.552 "name": "BaseBdev3", 00:16:15.552 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:15.552 "is_configured": true, 00:16:15.552 "data_offset": 2048, 00:16:15.552 "data_size": 63488 00:16:15.552 } 00:16:15.552 ] 00:16:15.552 }' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.552 
08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.552 08:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.933 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.933 "name": "raid_bdev1", 00:16:16.933 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:16.933 "strip_size_kb": 64, 00:16:16.933 "state": "online", 00:16:16.933 "raid_level": "raid5f", 00:16:16.933 "superblock": true, 00:16:16.933 "num_base_bdevs": 3, 00:16:16.933 "num_base_bdevs_discovered": 3, 00:16:16.933 
"num_base_bdevs_operational": 3, 00:16:16.933 "process": { 00:16:16.933 "type": "rebuild", 00:16:16.933 "target": "spare", 00:16:16.933 "progress": { 00:16:16.933 "blocks": 45056, 00:16:16.934 "percent": 35 00:16:16.934 } 00:16:16.934 }, 00:16:16.934 "base_bdevs_list": [ 00:16:16.934 { 00:16:16.934 "name": "spare", 00:16:16.934 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 }, 00:16:16.934 { 00:16:16.934 "name": "BaseBdev2", 00:16:16.934 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 }, 00:16:16.934 { 00:16:16.934 "name": "BaseBdev3", 00:16:16.934 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 } 00:16:16.934 ] 00:16:16.934 }' 00:16:16.934 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.934 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.934 08:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.934 08:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.934 08:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.873 "name": "raid_bdev1", 00:16:17.873 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:17.873 "strip_size_kb": 64, 00:16:17.873 "state": "online", 00:16:17.873 "raid_level": "raid5f", 00:16:17.873 "superblock": true, 00:16:17.873 "num_base_bdevs": 3, 00:16:17.873 "num_base_bdevs_discovered": 3, 00:16:17.873 "num_base_bdevs_operational": 3, 00:16:17.873 "process": { 00:16:17.873 "type": "rebuild", 00:16:17.873 "target": "spare", 00:16:17.873 "progress": { 00:16:17.873 "blocks": 67584, 00:16:17.873 "percent": 53 00:16:17.873 } 00:16:17.873 }, 00:16:17.873 "base_bdevs_list": [ 00:16:17.873 { 00:16:17.873 "name": "spare", 00:16:17.873 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:17.873 "is_configured": true, 00:16:17.873 "data_offset": 2048, 00:16:17.873 "data_size": 63488 00:16:17.873 }, 00:16:17.873 { 00:16:17.873 "name": "BaseBdev2", 00:16:17.873 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:17.873 "is_configured": true, 00:16:17.873 "data_offset": 2048, 00:16:17.873 "data_size": 63488 00:16:17.873 }, 00:16:17.873 { 00:16:17.873 "name": "BaseBdev3", 
00:16:17.873 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:17.873 "is_configured": true, 00:16:17.873 "data_offset": 2048, 00:16:17.873 "data_size": 63488 00:16:17.873 } 00:16:17.873 ] 00:16:17.873 }' 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.873 08:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.254 "name": "raid_bdev1", 00:16:19.254 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:19.254 "strip_size_kb": 64, 00:16:19.254 "state": "online", 00:16:19.254 "raid_level": "raid5f", 00:16:19.254 "superblock": true, 00:16:19.254 "num_base_bdevs": 3, 00:16:19.254 "num_base_bdevs_discovered": 3, 00:16:19.254 "num_base_bdevs_operational": 3, 00:16:19.254 "process": { 00:16:19.254 "type": "rebuild", 00:16:19.254 "target": "spare", 00:16:19.254 "progress": { 00:16:19.254 "blocks": 92160, 00:16:19.254 "percent": 72 00:16:19.254 } 00:16:19.254 }, 00:16:19.254 "base_bdevs_list": [ 00:16:19.254 { 00:16:19.254 "name": "spare", 00:16:19.254 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:19.254 "is_configured": true, 00:16:19.254 "data_offset": 2048, 00:16:19.254 "data_size": 63488 00:16:19.254 }, 00:16:19.254 { 00:16:19.254 "name": "BaseBdev2", 00:16:19.254 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:19.254 "is_configured": true, 00:16:19.254 "data_offset": 2048, 00:16:19.254 "data_size": 63488 00:16:19.254 }, 00:16:19.254 { 00:16:19.254 "name": "BaseBdev3", 00:16:19.254 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:19.254 "is_configured": true, 00:16:19.254 "data_offset": 2048, 00:16:19.254 "data_size": 63488 00:16:19.254 } 00:16:19.254 ] 00:16:19.254 }' 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.254 08:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.192 08:27:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.192 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.192 "name": "raid_bdev1", 00:16:20.192 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:20.192 "strip_size_kb": 64, 00:16:20.192 "state": "online", 00:16:20.192 "raid_level": "raid5f", 00:16:20.192 "superblock": true, 00:16:20.192 "num_base_bdevs": 3, 00:16:20.192 "num_base_bdevs_discovered": 3, 00:16:20.192 "num_base_bdevs_operational": 3, 00:16:20.192 "process": { 00:16:20.192 "type": "rebuild", 00:16:20.192 "target": "spare", 00:16:20.192 "progress": { 00:16:20.192 "blocks": 114688, 00:16:20.192 "percent": 90 00:16:20.192 } 00:16:20.193 }, 00:16:20.193 "base_bdevs_list": [ 00:16:20.193 { 00:16:20.193 "name": "spare", 00:16:20.193 "uuid": 
"702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 2048, 00:16:20.193 "data_size": 63488 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev2", 00:16:20.193 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 2048, 00:16:20.193 "data_size": 63488 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev3", 00:16:20.193 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 2048, 00:16:20.193 "data_size": 63488 00:16:20.193 } 00:16:20.193 ] 00:16:20.193 }' 00:16:20.193 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.193 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.193 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.193 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.193 08:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.761 [2024-12-13 08:27:32.871051] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:20.761 [2024-12-13 08:27:32.871226] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:20.761 [2024-12-13 08:27:32.871425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.331 "name": "raid_bdev1", 00:16:21.331 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:21.331 "strip_size_kb": 64, 00:16:21.331 "state": "online", 00:16:21.331 "raid_level": "raid5f", 00:16:21.331 "superblock": true, 00:16:21.331 "num_base_bdevs": 3, 00:16:21.331 "num_base_bdevs_discovered": 3, 00:16:21.331 "num_base_bdevs_operational": 3, 00:16:21.331 "base_bdevs_list": [ 00:16:21.331 { 00:16:21.331 "name": "spare", 00:16:21.331 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 }, 00:16:21.331 { 00:16:21.331 "name": "BaseBdev2", 00:16:21.331 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 }, 00:16:21.331 { 00:16:21.331 "name": "BaseBdev3", 00:16:21.331 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 } 
00:16:21.331 ] 00:16:21.331 }' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.331 "name": "raid_bdev1", 00:16:21.331 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:21.331 "strip_size_kb": 64, 00:16:21.331 "state": "online", 00:16:21.331 "raid_level": 
"raid5f", 00:16:21.331 "superblock": true, 00:16:21.331 "num_base_bdevs": 3, 00:16:21.331 "num_base_bdevs_discovered": 3, 00:16:21.331 "num_base_bdevs_operational": 3, 00:16:21.331 "base_bdevs_list": [ 00:16:21.331 { 00:16:21.331 "name": "spare", 00:16:21.331 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 }, 00:16:21.331 { 00:16:21.331 "name": "BaseBdev2", 00:16:21.331 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 }, 00:16:21.331 { 00:16:21.331 "name": "BaseBdev3", 00:16:21.331 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:21.331 "is_configured": true, 00:16:21.331 "data_offset": 2048, 00:16:21.331 "data_size": 63488 00:16:21.331 } 00:16:21.331 ] 00:16:21.331 }' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.331 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.613 08:27:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.613 "name": "raid_bdev1", 00:16:21.613 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:21.613 "strip_size_kb": 64, 00:16:21.613 "state": "online", 00:16:21.613 "raid_level": "raid5f", 00:16:21.613 "superblock": true, 00:16:21.613 "num_base_bdevs": 3, 00:16:21.613 "num_base_bdevs_discovered": 3, 00:16:21.613 "num_base_bdevs_operational": 3, 00:16:21.613 "base_bdevs_list": [ 00:16:21.613 { 00:16:21.613 "name": "spare", 00:16:21.613 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:21.613 "is_configured": true, 00:16:21.613 "data_offset": 2048, 00:16:21.613 "data_size": 63488 00:16:21.613 }, 00:16:21.613 { 00:16:21.613 "name": "BaseBdev2", 00:16:21.613 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:21.613 "is_configured": true, 00:16:21.613 "data_offset": 2048, 00:16:21.613 
"data_size": 63488 00:16:21.613 }, 00:16:21.613 { 00:16:21.613 "name": "BaseBdev3", 00:16:21.613 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:21.613 "is_configured": true, 00:16:21.613 "data_offset": 2048, 00:16:21.613 "data_size": 63488 00:16:21.613 } 00:16:21.613 ] 00:16:21.613 }' 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.613 08:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.873 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.873 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.873 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.873 [2024-12-13 08:27:34.169029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.873 [2024-12-13 08:27:34.169152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.874 [2024-12-13 08:27:34.169266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.874 [2024-12-13 08:27:34.169363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.874 [2024-12-13 08:27:34.169381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.874 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:22.134 /dev/nbd0 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.134 1+0 records in 00:16:22.134 1+0 records out 00:16:22.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268603 s, 15.2 MB/s 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.134 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:22.395 /dev/nbd1 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.395 1+0 records in 00:16:22.395 1+0 records out 00:16:22.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484 s, 8.5 MB/s 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.395 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.658 08:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.919 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.180 [2024-12-13 08:27:35.412258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.180 [2024-12-13 08:27:35.412337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.180 [2024-12-13 08:27:35.412361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:23.180 [2024-12-13 08:27:35.412375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.180 [2024-12-13 08:27:35.415133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.180 [2024-12-13 08:27:35.415179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.180 [2024-12-13 08:27:35.415299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:23.180 [2024-12-13 08:27:35.415361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.180 [2024-12-13 08:27:35.415524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.180 [2024-12-13 08:27:35.415640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.180 spare 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.180 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.180 [2024-12-13 08:27:35.515579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:23.180 [2024-12-13 08:27:35.515648] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:23.180 [2024-12-13 08:27:35.516024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:23.181 [2024-12-13 08:27:35.521774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:23.181 [2024-12-13 08:27:35.521798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:23.181 [2024-12-13 08:27:35.522035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.181 08:27:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.181 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.440 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.440 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.440 "name": "raid_bdev1", 00:16:23.440 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:23.440 "strip_size_kb": 64, 00:16:23.440 "state": "online", 00:16:23.440 "raid_level": "raid5f", 00:16:23.440 "superblock": true, 00:16:23.440 "num_base_bdevs": 3, 00:16:23.440 "num_base_bdevs_discovered": 3, 00:16:23.440 "num_base_bdevs_operational": 3, 00:16:23.440 "base_bdevs_list": [ 00:16:23.441 { 00:16:23.441 "name": "spare", 00:16:23.441 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:23.441 "is_configured": true, 00:16:23.441 "data_offset": 2048, 00:16:23.441 "data_size": 63488 00:16:23.441 }, 00:16:23.441 { 00:16:23.441 "name": "BaseBdev2", 00:16:23.441 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:23.441 "is_configured": true, 00:16:23.441 "data_offset": 2048, 00:16:23.441 "data_size": 63488 00:16:23.441 }, 00:16:23.441 { 00:16:23.441 "name": "BaseBdev3", 00:16:23.441 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:23.441 "is_configured": true, 00:16:23.441 "data_offset": 2048, 00:16:23.441 "data_size": 63488 00:16:23.441 } 00:16:23.441 ] 00:16:23.441 }' 00:16:23.441 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.441 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.700 08:27:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.700 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.700 "name": "raid_bdev1", 00:16:23.700 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:23.700 "strip_size_kb": 64, 00:16:23.700 "state": "online", 00:16:23.700 "raid_level": "raid5f", 00:16:23.700 "superblock": true, 00:16:23.701 "num_base_bdevs": 3, 00:16:23.701 "num_base_bdevs_discovered": 3, 00:16:23.701 "num_base_bdevs_operational": 3, 00:16:23.701 "base_bdevs_list": [ 00:16:23.701 { 00:16:23.701 "name": "spare", 00:16:23.701 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:23.701 "is_configured": true, 00:16:23.701 "data_offset": 2048, 00:16:23.701 "data_size": 63488 00:16:23.701 }, 00:16:23.701 { 00:16:23.701 "name": "BaseBdev2", 00:16:23.701 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:23.701 "is_configured": true, 00:16:23.701 "data_offset": 2048, 00:16:23.701 "data_size": 63488 00:16:23.701 }, 00:16:23.701 { 00:16:23.701 "name": "BaseBdev3", 00:16:23.701 "uuid": 
"52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:23.701 "is_configured": true, 00:16:23.701 "data_offset": 2048, 00:16:23.701 "data_size": 63488 00:16:23.701 } 00:16:23.701 ] 00:16:23.701 }' 00:16:23.701 08:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.701 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.701 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.961 [2024-12-13 08:27:36.131683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.961 
08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.961 "name": "raid_bdev1", 00:16:23.961 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:23.961 "strip_size_kb": 64, 00:16:23.961 "state": "online", 00:16:23.961 "raid_level": "raid5f", 00:16:23.961 "superblock": true, 00:16:23.961 "num_base_bdevs": 3, 00:16:23.961 "num_base_bdevs_discovered": 2, 00:16:23.961 "num_base_bdevs_operational": 2, 
00:16:23.961 "base_bdevs_list": [ 00:16:23.961 { 00:16:23.961 "name": null, 00:16:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.961 "is_configured": false, 00:16:23.961 "data_offset": 0, 00:16:23.961 "data_size": 63488 00:16:23.961 }, 00:16:23.961 { 00:16:23.961 "name": "BaseBdev2", 00:16:23.961 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:23.961 "is_configured": true, 00:16:23.961 "data_offset": 2048, 00:16:23.961 "data_size": 63488 00:16:23.961 }, 00:16:23.961 { 00:16:23.961 "name": "BaseBdev3", 00:16:23.961 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:23.961 "is_configured": true, 00:16:23.961 "data_offset": 2048, 00:16:23.961 "data_size": 63488 00:16:23.961 } 00:16:23.961 ] 00:16:23.961 }' 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.961 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.532 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.532 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 [2024-12-13 08:27:36.598993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.532 [2024-12-13 08:27:36.599334] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:24.532 [2024-12-13 08:27:36.599427] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:24.532 [2024-12-13 08:27:36.599505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.532 [2024-12-13 08:27:36.616928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:24.532 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.532 08:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:24.532 [2024-12-13 08:27:36.624919] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.471 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.472 "name": "raid_bdev1", 00:16:25.472 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:25.472 "strip_size_kb": 64, 00:16:25.472 "state": "online", 00:16:25.472 
"raid_level": "raid5f", 00:16:25.472 "superblock": true, 00:16:25.472 "num_base_bdevs": 3, 00:16:25.472 "num_base_bdevs_discovered": 3, 00:16:25.472 "num_base_bdevs_operational": 3, 00:16:25.472 "process": { 00:16:25.472 "type": "rebuild", 00:16:25.472 "target": "spare", 00:16:25.472 "progress": { 00:16:25.472 "blocks": 20480, 00:16:25.472 "percent": 16 00:16:25.472 } 00:16:25.472 }, 00:16:25.472 "base_bdevs_list": [ 00:16:25.472 { 00:16:25.472 "name": "spare", 00:16:25.472 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:25.472 "is_configured": true, 00:16:25.472 "data_offset": 2048, 00:16:25.472 "data_size": 63488 00:16:25.472 }, 00:16:25.472 { 00:16:25.472 "name": "BaseBdev2", 00:16:25.472 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:25.472 "is_configured": true, 00:16:25.472 "data_offset": 2048, 00:16:25.472 "data_size": 63488 00:16:25.472 }, 00:16:25.472 { 00:16:25.472 "name": "BaseBdev3", 00:16:25.472 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:25.472 "is_configured": true, 00:16:25.472 "data_offset": 2048, 00:16:25.472 "data_size": 63488 00:16:25.472 } 00:16:25.472 ] 00:16:25.472 }' 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.472 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.472 [2024-12-13 08:27:37.768645] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.732 [2024-12-13 08:27:37.835894] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.732 [2024-12-13 08:27:37.835990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.732 [2024-12-13 08:27:37.836009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.732 [2024-12-13 08:27:37.836020] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.732 "name": "raid_bdev1", 00:16:25.732 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:25.732 "strip_size_kb": 64, 00:16:25.732 "state": "online", 00:16:25.732 "raid_level": "raid5f", 00:16:25.732 "superblock": true, 00:16:25.732 "num_base_bdevs": 3, 00:16:25.732 "num_base_bdevs_discovered": 2, 00:16:25.732 "num_base_bdevs_operational": 2, 00:16:25.732 "base_bdevs_list": [ 00:16:25.732 { 00:16:25.732 "name": null, 00:16:25.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.732 "is_configured": false, 00:16:25.732 "data_offset": 0, 00:16:25.732 "data_size": 63488 00:16:25.732 }, 00:16:25.732 { 00:16:25.732 "name": "BaseBdev2", 00:16:25.732 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:25.732 "is_configured": true, 00:16:25.732 "data_offset": 2048, 00:16:25.732 "data_size": 63488 00:16:25.732 }, 00:16:25.732 { 00:16:25.732 "name": "BaseBdev3", 00:16:25.732 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:25.732 "is_configured": true, 00:16:25.732 "data_offset": 2048, 00:16:25.732 "data_size": 63488 00:16:25.732 } 00:16:25.732 ] 00:16:25.732 }' 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.732 08:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.301 08:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.301 08:27:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.301 08:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.301 [2024-12-13 08:27:38.381168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.301 [2024-12-13 08:27:38.381302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.301 [2024-12-13 08:27:38.381347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:26.301 [2024-12-13 08:27:38.381386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.301 [2024-12-13 08:27:38.382001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.301 [2024-12-13 08:27:38.382076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.301 [2024-12-13 08:27:38.382251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:26.301 [2024-12-13 08:27:38.382309] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.301 [2024-12-13 08:27:38.382363] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:26.301 [2024-12-13 08:27:38.382451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.301 [2024-12-13 08:27:38.399389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:26.301 spare 00:16:26.301 08:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.301 08:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:26.301 [2024-12-13 08:27:38.407181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.241 "name": "raid_bdev1", 00:16:27.241 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:27.241 "strip_size_kb": 64, 00:16:27.241 "state": 
"online", 00:16:27.241 "raid_level": "raid5f", 00:16:27.241 "superblock": true, 00:16:27.241 "num_base_bdevs": 3, 00:16:27.241 "num_base_bdevs_discovered": 3, 00:16:27.241 "num_base_bdevs_operational": 3, 00:16:27.241 "process": { 00:16:27.241 "type": "rebuild", 00:16:27.241 "target": "spare", 00:16:27.241 "progress": { 00:16:27.241 "blocks": 20480, 00:16:27.241 "percent": 16 00:16:27.241 } 00:16:27.241 }, 00:16:27.241 "base_bdevs_list": [ 00:16:27.241 { 00:16:27.241 "name": "spare", 00:16:27.241 "uuid": "702a59a4-46c4-5d3e-85fb-d93dfdfe5bca", 00:16:27.241 "is_configured": true, 00:16:27.241 "data_offset": 2048, 00:16:27.241 "data_size": 63488 00:16:27.241 }, 00:16:27.241 { 00:16:27.241 "name": "BaseBdev2", 00:16:27.241 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:27.241 "is_configured": true, 00:16:27.241 "data_offset": 2048, 00:16:27.241 "data_size": 63488 00:16:27.241 }, 00:16:27.241 { 00:16:27.241 "name": "BaseBdev3", 00:16:27.241 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:27.241 "is_configured": true, 00:16:27.241 "data_offset": 2048, 00:16:27.241 "data_size": 63488 00:16:27.241 } 00:16:27.241 ] 00:16:27.241 }' 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.241 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.241 [2024-12-13 08:27:39.569997] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.501 [2024-12-13 08:27:39.617884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.501 [2024-12-13 08:27:39.617979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.501 [2024-12-13 08:27:39.617999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.501 [2024-12-13 08:27:39.618007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.501 "name": "raid_bdev1", 00:16:27.501 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:27.501 "strip_size_kb": 64, 00:16:27.501 "state": "online", 00:16:27.501 "raid_level": "raid5f", 00:16:27.501 "superblock": true, 00:16:27.501 "num_base_bdevs": 3, 00:16:27.501 "num_base_bdevs_discovered": 2, 00:16:27.501 "num_base_bdevs_operational": 2, 00:16:27.501 "base_bdevs_list": [ 00:16:27.501 { 00:16:27.501 "name": null, 00:16:27.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.501 "is_configured": false, 00:16:27.501 "data_offset": 0, 00:16:27.501 "data_size": 63488 00:16:27.501 }, 00:16:27.501 { 00:16:27.501 "name": "BaseBdev2", 00:16:27.501 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:27.501 "is_configured": true, 00:16:27.501 "data_offset": 2048, 00:16:27.501 "data_size": 63488 00:16:27.501 }, 00:16:27.501 { 00:16:27.501 "name": "BaseBdev3", 00:16:27.501 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:27.501 "is_configured": true, 00:16:27.501 "data_offset": 2048, 00:16:27.501 "data_size": 63488 00:16:27.501 } 00:16:27.501 ] 00:16:27.501 }' 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.501 08:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.761 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.020 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.020 "name": "raid_bdev1", 00:16:28.020 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:28.020 "strip_size_kb": 64, 00:16:28.020 "state": "online", 00:16:28.020 "raid_level": "raid5f", 00:16:28.020 "superblock": true, 00:16:28.020 "num_base_bdevs": 3, 00:16:28.020 "num_base_bdevs_discovered": 2, 00:16:28.020 "num_base_bdevs_operational": 2, 00:16:28.020 "base_bdevs_list": [ 00:16:28.020 { 00:16:28.020 "name": null, 00:16:28.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.020 "is_configured": false, 00:16:28.020 "data_offset": 0, 00:16:28.020 "data_size": 63488 00:16:28.020 }, 00:16:28.020 { 00:16:28.020 "name": "BaseBdev2", 00:16:28.020 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:28.020 "is_configured": true, 00:16:28.020 "data_offset": 2048, 00:16:28.020 "data_size": 63488 00:16:28.020 }, 00:16:28.020 { 00:16:28.020 "name": "BaseBdev3", 00:16:28.020 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:28.020 "is_configured": true, 
00:16:28.020 "data_offset": 2048, 00:16:28.020 "data_size": 63488 00:16:28.020 } 00:16:28.020 ] 00:16:28.020 }' 00:16:28.020 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.020 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.020 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.020 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.021 [2024-12-13 08:27:40.265289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:28.021 [2024-12-13 08:27:40.265366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.021 [2024-12-13 08:27:40.265397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:28.021 [2024-12-13 08:27:40.265406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.021 [2024-12-13 08:27:40.265880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.021 [2024-12-13 
08:27:40.265898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.021 [2024-12-13 08:27:40.265988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:28.021 [2024-12-13 08:27:40.266003] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.021 [2024-12-13 08:27:40.266026] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:28.021 [2024-12-13 08:27:40.266036] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:28.021 BaseBdev1 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.021 08:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.960 08:27:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.960 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.219 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.219 "name": "raid_bdev1", 00:16:29.219 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:29.219 "strip_size_kb": 64, 00:16:29.219 "state": "online", 00:16:29.219 "raid_level": "raid5f", 00:16:29.219 "superblock": true, 00:16:29.219 "num_base_bdevs": 3, 00:16:29.219 "num_base_bdevs_discovered": 2, 00:16:29.219 "num_base_bdevs_operational": 2, 00:16:29.219 "base_bdevs_list": [ 00:16:29.219 { 00:16:29.219 "name": null, 00:16:29.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.219 "is_configured": false, 00:16:29.219 "data_offset": 0, 00:16:29.219 "data_size": 63488 00:16:29.219 }, 00:16:29.219 { 00:16:29.219 "name": "BaseBdev2", 00:16:29.219 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:29.219 "is_configured": true, 00:16:29.219 "data_offset": 2048, 00:16:29.219 "data_size": 63488 00:16:29.219 }, 00:16:29.219 { 00:16:29.219 "name": "BaseBdev3", 00:16:29.219 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:29.219 "is_configured": true, 00:16:29.219 "data_offset": 2048, 00:16:29.219 "data_size": 63488 00:16:29.219 } 00:16:29.219 ] 00:16:29.219 }' 00:16:29.219 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.219 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.479 "name": "raid_bdev1", 00:16:29.479 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:29.479 "strip_size_kb": 64, 00:16:29.479 "state": "online", 00:16:29.479 "raid_level": "raid5f", 00:16:29.479 "superblock": true, 00:16:29.479 "num_base_bdevs": 3, 00:16:29.479 "num_base_bdevs_discovered": 2, 00:16:29.479 "num_base_bdevs_operational": 2, 00:16:29.479 "base_bdevs_list": [ 00:16:29.479 { 00:16:29.479 "name": null, 00:16:29.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.479 "is_configured": false, 00:16:29.479 "data_offset": 0, 00:16:29.479 "data_size": 63488 00:16:29.479 }, 00:16:29.479 { 00:16:29.479 "name": "BaseBdev2", 00:16:29.479 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 
00:16:29.479 "is_configured": true, 00:16:29.479 "data_offset": 2048, 00:16:29.479 "data_size": 63488 00:16:29.479 }, 00:16:29.479 { 00:16:29.479 "name": "BaseBdev3", 00:16:29.479 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:29.479 "is_configured": true, 00:16:29.479 "data_offset": 2048, 00:16:29.479 "data_size": 63488 00:16:29.479 } 00:16:29.479 ] 00:16:29.479 }' 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.479 08:27:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.479 [2024-12-13 08:27:41.818740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.479 [2024-12-13 08:27:41.818972] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:29.479 [2024-12-13 08:27:41.819043] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.479 request: 00:16:29.479 { 00:16:29.479 "base_bdev": "BaseBdev1", 00:16:29.479 "raid_bdev": "raid_bdev1", 00:16:29.479 "method": "bdev_raid_add_base_bdev", 00:16:29.479 "req_id": 1 00:16:29.479 } 00:16:29.479 Got JSON-RPC error response 00:16:29.479 response: 00:16:29.479 { 00:16:29.479 "code": -22, 00:16:29.479 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:29.479 } 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.479 08:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.862 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.862 "name": "raid_bdev1", 00:16:30.862 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:30.862 "strip_size_kb": 64, 00:16:30.862 "state": "online", 00:16:30.862 "raid_level": "raid5f", 00:16:30.862 "superblock": true, 00:16:30.862 "num_base_bdevs": 3, 00:16:30.862 "num_base_bdevs_discovered": 2, 00:16:30.862 "num_base_bdevs_operational": 2, 00:16:30.862 "base_bdevs_list": [ 00:16:30.862 { 00:16:30.862 "name": null, 00:16:30.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.862 "is_configured": false, 00:16:30.862 "data_offset": 0, 00:16:30.862 "data_size": 63488 00:16:30.862 }, 00:16:30.862 { 00:16:30.862 
"name": "BaseBdev2", 00:16:30.862 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:30.862 "is_configured": true, 00:16:30.862 "data_offset": 2048, 00:16:30.862 "data_size": 63488 00:16:30.862 }, 00:16:30.862 { 00:16:30.862 "name": "BaseBdev3", 00:16:30.863 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:30.863 "is_configured": true, 00:16:30.863 "data_offset": 2048, 00:16:30.863 "data_size": 63488 00:16:30.863 } 00:16:30.863 ] 00:16:30.863 }' 00:16:30.863 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.863 08:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.121 "name": "raid_bdev1", 00:16:31.121 "uuid": "a59902a8-1e90-4704-839a-1b7373972c48", 00:16:31.121 
"strip_size_kb": 64, 00:16:31.121 "state": "online", 00:16:31.121 "raid_level": "raid5f", 00:16:31.121 "superblock": true, 00:16:31.121 "num_base_bdevs": 3, 00:16:31.121 "num_base_bdevs_discovered": 2, 00:16:31.121 "num_base_bdevs_operational": 2, 00:16:31.121 "base_bdevs_list": [ 00:16:31.121 { 00:16:31.121 "name": null, 00:16:31.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.121 "is_configured": false, 00:16:31.121 "data_offset": 0, 00:16:31.121 "data_size": 63488 00:16:31.121 }, 00:16:31.121 { 00:16:31.121 "name": "BaseBdev2", 00:16:31.121 "uuid": "a8f985ef-71ef-58e0-afeb-9d6804a3ab8b", 00:16:31.121 "is_configured": true, 00:16:31.121 "data_offset": 2048, 00:16:31.121 "data_size": 63488 00:16:31.121 }, 00:16:31.121 { 00:16:31.121 "name": "BaseBdev3", 00:16:31.121 "uuid": "52d27dd1-43e2-56a7-80fd-d8a0e638145e", 00:16:31.121 "is_configured": true, 00:16:31.121 "data_offset": 2048, 00:16:31.121 "data_size": 63488 00:16:31.121 } 00:16:31.121 ] 00:16:31.121 }' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82201 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82201 ']' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82201 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.121 08:27:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82201 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.121 killing process with pid 82201 00:16:31.121 Received shutdown signal, test time was about 60.000000 seconds 00:16:31.121 00:16:31.121 Latency(us) 00:16:31.121 [2024-12-13T08:27:43.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.121 [2024-12-13T08:27:43.486Z] =================================================================================================================== 00:16:31.121 [2024-12-13T08:27:43.486Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82201' 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82201 00:16:31.121 [2024-12-13 08:27:43.468049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.121 [2024-12-13 08:27:43.468224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.121 08:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82201 00:16:31.121 [2024-12-13 08:27:43.468303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.121 [2024-12-13 08:27:43.468318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:31.689 [2024-12-13 08:27:43.867530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.661 08:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:32.661 00:16:32.661 real 0m23.274s 00:16:32.661 user 0m29.808s 
00:16:32.661 sys 0m2.723s 00:16:32.661 08:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.661 08:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.661 ************************************ 00:16:32.661 END TEST raid5f_rebuild_test_sb 00:16:32.661 ************************************ 00:16:32.919 08:27:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:32.919 08:27:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:32.919 08:27:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:32.919 08:27:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.919 08:27:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.919 ************************************ 00:16:32.919 START TEST raid5f_state_function_test 00:16:32.919 ************************************ 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.919 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82953 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82953' 00:16:32.920 Process raid pid: 82953 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82953 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82953 ']' 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.920 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.920 [2024-12-13 08:27:45.155288] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:16:32.920 [2024-12-13 08:27:45.155501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.179 [2024-12-13 08:27:45.331860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.179 [2024-12-13 08:27:45.454707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.438 [2024-12-13 08:27:45.654787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.438 [2024-12-13 08:27:45.654909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.698 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.698 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:33.698 08:27:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:33.698 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.698 08:27:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.698 [2024-12-13 08:27:46.006614] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.698 [2024-12-13 08:27:46.006732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.698 [2024-12-13 08:27:46.006764] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.698 [2024-12-13 08:27:46.006795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.698 [2024-12-13 08:27:46.006832] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:33.698 [2024-12-13 08:27:46.006856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.698 [2024-12-13 08:27:46.006877] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.698 [2024-12-13 08:27:46.006901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.698 08:27:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.698 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.972 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.972 "name": "Existed_Raid", 00:16:33.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.972 "strip_size_kb": 64, 00:16:33.972 "state": "configuring", 00:16:33.972 "raid_level": "raid5f", 00:16:33.972 "superblock": false, 00:16:33.972 "num_base_bdevs": 4, 00:16:33.972 "num_base_bdevs_discovered": 0, 00:16:33.972 "num_base_bdevs_operational": 4, 00:16:33.972 "base_bdevs_list": [ 00:16:33.972 { 00:16:33.972 "name": "BaseBdev1", 00:16:33.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.972 "is_configured": false, 00:16:33.972 "data_offset": 0, 00:16:33.972 "data_size": 0 00:16:33.972 }, 00:16:33.972 { 00:16:33.972 "name": "BaseBdev2", 00:16:33.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.972 "is_configured": false, 00:16:33.972 "data_offset": 0, 00:16:33.972 "data_size": 0 00:16:33.972 }, 00:16:33.972 { 00:16:33.972 "name": "BaseBdev3", 00:16:33.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.972 "is_configured": false, 00:16:33.972 "data_offset": 0, 00:16:33.972 "data_size": 0 00:16:33.972 }, 00:16:33.972 { 00:16:33.972 "name": "BaseBdev4", 00:16:33.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.972 "is_configured": false, 00:16:33.972 "data_offset": 0, 00:16:33.972 "data_size": 0 00:16:33.972 } 00:16:33.972 ] 00:16:33.972 }' 00:16:33.972 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.972 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 [2024-12-13 08:27:46.453777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.232 [2024-12-13 08:27:46.453888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 [2024-12-13 08:27:46.465785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.232 [2024-12-13 08:27:46.465838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.232 [2024-12-13 08:27:46.465849] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.232 [2024-12-13 08:27:46.465860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.232 [2024-12-13 08:27:46.465868] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.232 [2024-12-13 08:27:46.465878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.232 [2024-12-13 08:27:46.465885] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:34.232 [2024-12-13 08:27:46.465896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 [2024-12-13 08:27:46.518303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.232 BaseBdev1 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 
08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 [ 00:16:34.233 { 00:16:34.233 "name": "BaseBdev1", 00:16:34.233 "aliases": [ 00:16:34.233 "a198b537-0ce1-4a67-bc32-660f4d579cd7" 00:16:34.233 ], 00:16:34.233 "product_name": "Malloc disk", 00:16:34.233 "block_size": 512, 00:16:34.233 "num_blocks": 65536, 00:16:34.233 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:34.233 "assigned_rate_limits": { 00:16:34.233 "rw_ios_per_sec": 0, 00:16:34.233 "rw_mbytes_per_sec": 0, 00:16:34.233 "r_mbytes_per_sec": 0, 00:16:34.233 "w_mbytes_per_sec": 0 00:16:34.233 }, 00:16:34.233 "claimed": true, 00:16:34.233 "claim_type": "exclusive_write", 00:16:34.233 "zoned": false, 00:16:34.233 "supported_io_types": { 00:16:34.233 "read": true, 00:16:34.233 "write": true, 00:16:34.233 "unmap": true, 00:16:34.233 "flush": true, 00:16:34.233 "reset": true, 00:16:34.233 "nvme_admin": false, 00:16:34.233 "nvme_io": false, 00:16:34.233 "nvme_io_md": false, 00:16:34.233 "write_zeroes": true, 00:16:34.233 "zcopy": true, 00:16:34.233 "get_zone_info": false, 00:16:34.233 "zone_management": false, 00:16:34.233 "zone_append": false, 00:16:34.233 "compare": false, 00:16:34.233 "compare_and_write": false, 00:16:34.233 "abort": true, 00:16:34.233 "seek_hole": false, 00:16:34.233 "seek_data": false, 00:16:34.233 "copy": true, 00:16:34.233 "nvme_iov_md": false 00:16:34.233 }, 00:16:34.233 "memory_domains": [ 00:16:34.233 { 00:16:34.233 "dma_device_id": "system", 00:16:34.233 "dma_device_type": 1 00:16:34.233 }, 00:16:34.233 { 00:16:34.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.233 "dma_device_type": 2 00:16:34.233 } 00:16:34.233 ], 00:16:34.233 "driver_specific": {} 00:16:34.233 } 
00:16:34.233 ] 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.233 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:34.492 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.492 "name": "Existed_Raid", 00:16:34.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.492 "strip_size_kb": 64, 00:16:34.492 "state": "configuring", 00:16:34.492 "raid_level": "raid5f", 00:16:34.492 "superblock": false, 00:16:34.492 "num_base_bdevs": 4, 00:16:34.492 "num_base_bdevs_discovered": 1, 00:16:34.492 "num_base_bdevs_operational": 4, 00:16:34.492 "base_bdevs_list": [ 00:16:34.492 { 00:16:34.492 "name": "BaseBdev1", 00:16:34.492 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:34.492 "is_configured": true, 00:16:34.492 "data_offset": 0, 00:16:34.492 "data_size": 65536 00:16:34.492 }, 00:16:34.492 { 00:16:34.492 "name": "BaseBdev2", 00:16:34.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.492 "is_configured": false, 00:16:34.492 "data_offset": 0, 00:16:34.492 "data_size": 0 00:16:34.492 }, 00:16:34.492 { 00:16:34.492 "name": "BaseBdev3", 00:16:34.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.492 "is_configured": false, 00:16:34.492 "data_offset": 0, 00:16:34.492 "data_size": 0 00:16:34.492 }, 00:16:34.492 { 00:16:34.492 "name": "BaseBdev4", 00:16:34.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.492 "is_configured": false, 00:16:34.492 "data_offset": 0, 00:16:34.492 "data_size": 0 00:16:34.492 } 00:16:34.492 ] 00:16:34.492 }' 00:16:34.492 08:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.492 08:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 
[2024-12-13 08:27:47.065427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.752 [2024-12-13 08:27:47.065485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 [2024-12-13 08:27:47.073459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.752 [2024-12-13 08:27:47.075246] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.752 [2024-12-13 08:27:47.075292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.752 [2024-12-13 08:27:47.075302] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.752 [2024-12-13 08:27:47.075313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.752 [2024-12-13 08:27:47.075319] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.752 [2024-12-13 08:27:47.075328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.012 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.012 "name": "Existed_Raid", 00:16:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:35.012 "strip_size_kb": 64, 00:16:35.012 "state": "configuring", 00:16:35.012 "raid_level": "raid5f", 00:16:35.012 "superblock": false, 00:16:35.012 "num_base_bdevs": 4, 00:16:35.012 "num_base_bdevs_discovered": 1, 00:16:35.012 "num_base_bdevs_operational": 4, 00:16:35.012 "base_bdevs_list": [ 00:16:35.012 { 00:16:35.012 "name": "BaseBdev1", 00:16:35.012 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:35.012 "is_configured": true, 00:16:35.012 "data_offset": 0, 00:16:35.012 "data_size": 65536 00:16:35.012 }, 00:16:35.012 { 00:16:35.012 "name": "BaseBdev2", 00:16:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.012 "is_configured": false, 00:16:35.012 "data_offset": 0, 00:16:35.012 "data_size": 0 00:16:35.012 }, 00:16:35.012 { 00:16:35.012 "name": "BaseBdev3", 00:16:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.012 "is_configured": false, 00:16:35.012 "data_offset": 0, 00:16:35.012 "data_size": 0 00:16:35.012 }, 00:16:35.012 { 00:16:35.012 "name": "BaseBdev4", 00:16:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.012 "is_configured": false, 00:16:35.012 "data_offset": 0, 00:16:35.012 "data_size": 0 00:16:35.012 } 00:16:35.012 ] 00:16:35.012 }' 00:16:35.012 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.012 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.271 [2024-12-13 08:27:47.540183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.271 BaseBdev2 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.271 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.272 [ 00:16:35.272 { 00:16:35.272 "name": "BaseBdev2", 00:16:35.272 "aliases": [ 00:16:35.272 "8e39c730-270d-40f5-875f-3cb4daee5cff" 00:16:35.272 ], 00:16:35.272 "product_name": "Malloc disk", 00:16:35.272 "block_size": 512, 00:16:35.272 "num_blocks": 65536, 00:16:35.272 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:35.272 "assigned_rate_limits": { 00:16:35.272 "rw_ios_per_sec": 0, 00:16:35.272 "rw_mbytes_per_sec": 0, 00:16:35.272 
"r_mbytes_per_sec": 0, 00:16:35.272 "w_mbytes_per_sec": 0 00:16:35.272 }, 00:16:35.272 "claimed": true, 00:16:35.272 "claim_type": "exclusive_write", 00:16:35.272 "zoned": false, 00:16:35.272 "supported_io_types": { 00:16:35.272 "read": true, 00:16:35.272 "write": true, 00:16:35.272 "unmap": true, 00:16:35.272 "flush": true, 00:16:35.272 "reset": true, 00:16:35.272 "nvme_admin": false, 00:16:35.272 "nvme_io": false, 00:16:35.272 "nvme_io_md": false, 00:16:35.272 "write_zeroes": true, 00:16:35.272 "zcopy": true, 00:16:35.272 "get_zone_info": false, 00:16:35.272 "zone_management": false, 00:16:35.272 "zone_append": false, 00:16:35.272 "compare": false, 00:16:35.272 "compare_and_write": false, 00:16:35.272 "abort": true, 00:16:35.272 "seek_hole": false, 00:16:35.272 "seek_data": false, 00:16:35.272 "copy": true, 00:16:35.272 "nvme_iov_md": false 00:16:35.272 }, 00:16:35.272 "memory_domains": [ 00:16:35.272 { 00:16:35.272 "dma_device_id": "system", 00:16:35.272 "dma_device_type": 1 00:16:35.272 }, 00:16:35.272 { 00:16:35.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.272 "dma_device_type": 2 00:16:35.272 } 00:16:35.272 ], 00:16:35.272 "driver_specific": {} 00:16:35.272 } 00:16:35.272 ] 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.272 "name": "Existed_Raid", 00:16:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.272 "strip_size_kb": 64, 00:16:35.272 "state": "configuring", 00:16:35.272 "raid_level": "raid5f", 00:16:35.272 "superblock": false, 00:16:35.272 "num_base_bdevs": 4, 00:16:35.272 "num_base_bdevs_discovered": 2, 00:16:35.272 "num_base_bdevs_operational": 4, 00:16:35.272 "base_bdevs_list": [ 00:16:35.272 { 00:16:35.272 "name": "BaseBdev1", 00:16:35.272 "uuid": 
"a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:35.272 "is_configured": true, 00:16:35.272 "data_offset": 0, 00:16:35.272 "data_size": 65536 00:16:35.272 }, 00:16:35.272 { 00:16:35.272 "name": "BaseBdev2", 00:16:35.272 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:35.272 "is_configured": true, 00:16:35.272 "data_offset": 0, 00:16:35.272 "data_size": 65536 00:16:35.272 }, 00:16:35.272 { 00:16:35.272 "name": "BaseBdev3", 00:16:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.272 "is_configured": false, 00:16:35.272 "data_offset": 0, 00:16:35.272 "data_size": 0 00:16:35.272 }, 00:16:35.272 { 00:16:35.272 "name": "BaseBdev4", 00:16:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.272 "is_configured": false, 00:16:35.272 "data_offset": 0, 00:16:35.272 "data_size": 0 00:16:35.272 } 00:16:35.272 ] 00:16:35.272 }' 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.272 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 08:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.841 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.841 08:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 [2024-12-13 08:27:48.032614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.841 BaseBdev3 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 [ 00:16:35.841 { 00:16:35.841 "name": "BaseBdev3", 00:16:35.841 "aliases": [ 00:16:35.841 "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1" 00:16:35.841 ], 00:16:35.841 "product_name": "Malloc disk", 00:16:35.841 "block_size": 512, 00:16:35.841 "num_blocks": 65536, 00:16:35.841 "uuid": "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1", 00:16:35.841 "assigned_rate_limits": { 00:16:35.841 "rw_ios_per_sec": 0, 00:16:35.841 "rw_mbytes_per_sec": 0, 00:16:35.841 "r_mbytes_per_sec": 0, 00:16:35.841 "w_mbytes_per_sec": 0 00:16:35.841 }, 00:16:35.841 "claimed": true, 00:16:35.841 "claim_type": "exclusive_write", 00:16:35.841 "zoned": false, 00:16:35.841 "supported_io_types": { 00:16:35.841 "read": true, 00:16:35.841 "write": true, 00:16:35.841 "unmap": true, 00:16:35.841 "flush": true, 00:16:35.841 "reset": true, 00:16:35.841 "nvme_admin": false, 
00:16:35.841 "nvme_io": false, 00:16:35.841 "nvme_io_md": false, 00:16:35.841 "write_zeroes": true, 00:16:35.841 "zcopy": true, 00:16:35.841 "get_zone_info": false, 00:16:35.841 "zone_management": false, 00:16:35.841 "zone_append": false, 00:16:35.841 "compare": false, 00:16:35.841 "compare_and_write": false, 00:16:35.841 "abort": true, 00:16:35.841 "seek_hole": false, 00:16:35.841 "seek_data": false, 00:16:35.841 "copy": true, 00:16:35.841 "nvme_iov_md": false 00:16:35.841 }, 00:16:35.841 "memory_domains": [ 00:16:35.841 { 00:16:35.841 "dma_device_id": "system", 00:16:35.841 "dma_device_type": 1 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.841 "dma_device_type": 2 00:16:35.841 } 00:16:35.841 ], 00:16:35.841 "driver_specific": {} 00:16:35.841 } 00:16:35.841 ] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.841 "name": "Existed_Raid", 00:16:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.841 "strip_size_kb": 64, 00:16:35.841 "state": "configuring", 00:16:35.841 "raid_level": "raid5f", 00:16:35.841 "superblock": false, 00:16:35.841 "num_base_bdevs": 4, 00:16:35.841 "num_base_bdevs_discovered": 3, 00:16:35.841 "num_base_bdevs_operational": 4, 00:16:35.841 "base_bdevs_list": [ 00:16:35.841 { 00:16:35.841 "name": "BaseBdev1", 00:16:35.841 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev2", 00:16:35.841 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 
00:16:35.841 "name": "BaseBdev3", 00:16:35.841 "uuid": "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev4", 00:16:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.841 "is_configured": false, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 0 00:16:35.841 } 00:16:35.841 ] 00:16:35.841 }' 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.841 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.409 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:36.409 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.409 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.409 [2024-12-13 08:27:48.561845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:36.409 [2024-12-13 08:27:48.562004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:36.410 [2024-12-13 08:27:48.562022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:36.410 [2024-12-13 08:27:48.562345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:36.410 [2024-12-13 08:27:48.570873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.410 [2024-12-13 08:27:48.570938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:36.410 [2024-12-13 08:27:48.571331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.410 BaseBdev4 00:16:36.410 08:27:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.410 [ 00:16:36.410 { 00:16:36.410 "name": "BaseBdev4", 00:16:36.410 "aliases": [ 00:16:36.410 "8ae4654d-65fa-4db8-87af-d06bace698d7" 00:16:36.410 ], 00:16:36.410 "product_name": "Malloc disk", 00:16:36.410 "block_size": 512, 00:16:36.410 "num_blocks": 65536, 00:16:36.410 "uuid": "8ae4654d-65fa-4db8-87af-d06bace698d7", 00:16:36.410 "assigned_rate_limits": { 00:16:36.410 "rw_ios_per_sec": 0, 00:16:36.410 
"rw_mbytes_per_sec": 0, 00:16:36.410 "r_mbytes_per_sec": 0, 00:16:36.410 "w_mbytes_per_sec": 0 00:16:36.410 }, 00:16:36.410 "claimed": true, 00:16:36.410 "claim_type": "exclusive_write", 00:16:36.410 "zoned": false, 00:16:36.410 "supported_io_types": { 00:16:36.410 "read": true, 00:16:36.410 "write": true, 00:16:36.410 "unmap": true, 00:16:36.410 "flush": true, 00:16:36.410 "reset": true, 00:16:36.410 "nvme_admin": false, 00:16:36.410 "nvme_io": false, 00:16:36.410 "nvme_io_md": false, 00:16:36.410 "write_zeroes": true, 00:16:36.410 "zcopy": true, 00:16:36.410 "get_zone_info": false, 00:16:36.410 "zone_management": false, 00:16:36.410 "zone_append": false, 00:16:36.410 "compare": false, 00:16:36.410 "compare_and_write": false, 00:16:36.410 "abort": true, 00:16:36.410 "seek_hole": false, 00:16:36.410 "seek_data": false, 00:16:36.410 "copy": true, 00:16:36.410 "nvme_iov_md": false 00:16:36.410 }, 00:16:36.410 "memory_domains": [ 00:16:36.410 { 00:16:36.410 "dma_device_id": "system", 00:16:36.410 "dma_device_type": 1 00:16:36.410 }, 00:16:36.410 { 00:16:36.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.410 "dma_device_type": 2 00:16:36.410 } 00:16:36.410 ], 00:16:36.410 "driver_specific": {} 00:16:36.410 } 00:16:36.410 ] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.410 08:27:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.410 "name": "Existed_Raid", 00:16:36.410 "uuid": "8c66fdf5-7671-4534-8477-6af70f73d501", 00:16:36.410 "strip_size_kb": 64, 00:16:36.410 "state": "online", 00:16:36.410 "raid_level": "raid5f", 00:16:36.410 "superblock": false, 00:16:36.410 "num_base_bdevs": 4, 00:16:36.410 "num_base_bdevs_discovered": 4, 00:16:36.410 "num_base_bdevs_operational": 4, 00:16:36.410 "base_bdevs_list": [ 00:16:36.410 { 00:16:36.410 "name": 
"BaseBdev1", 00:16:36.410 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:36.410 "is_configured": true, 00:16:36.410 "data_offset": 0, 00:16:36.410 "data_size": 65536 00:16:36.410 }, 00:16:36.410 { 00:16:36.410 "name": "BaseBdev2", 00:16:36.410 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:36.410 "is_configured": true, 00:16:36.410 "data_offset": 0, 00:16:36.410 "data_size": 65536 00:16:36.410 }, 00:16:36.410 { 00:16:36.410 "name": "BaseBdev3", 00:16:36.410 "uuid": "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1", 00:16:36.410 "is_configured": true, 00:16:36.410 "data_offset": 0, 00:16:36.410 "data_size": 65536 00:16:36.410 }, 00:16:36.410 { 00:16:36.410 "name": "BaseBdev4", 00:16:36.410 "uuid": "8ae4654d-65fa-4db8-87af-d06bace698d7", 00:16:36.410 "is_configured": true, 00:16:36.410 "data_offset": 0, 00:16:36.410 "data_size": 65536 00:16:36.410 } 00:16:36.410 ] 00:16:36.410 }' 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.410 08:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.978 [2024-12-13 08:27:49.048001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.978 "name": "Existed_Raid", 00:16:36.978 "aliases": [ 00:16:36.978 "8c66fdf5-7671-4534-8477-6af70f73d501" 00:16:36.978 ], 00:16:36.978 "product_name": "Raid Volume", 00:16:36.978 "block_size": 512, 00:16:36.978 "num_blocks": 196608, 00:16:36.978 "uuid": "8c66fdf5-7671-4534-8477-6af70f73d501", 00:16:36.978 "assigned_rate_limits": { 00:16:36.978 "rw_ios_per_sec": 0, 00:16:36.978 "rw_mbytes_per_sec": 0, 00:16:36.978 "r_mbytes_per_sec": 0, 00:16:36.978 "w_mbytes_per_sec": 0 00:16:36.978 }, 00:16:36.978 "claimed": false, 00:16:36.978 "zoned": false, 00:16:36.978 "supported_io_types": { 00:16:36.978 "read": true, 00:16:36.978 "write": true, 00:16:36.978 "unmap": false, 00:16:36.978 "flush": false, 00:16:36.978 "reset": true, 00:16:36.978 "nvme_admin": false, 00:16:36.978 "nvme_io": false, 00:16:36.978 "nvme_io_md": false, 00:16:36.978 "write_zeroes": true, 00:16:36.978 "zcopy": false, 00:16:36.978 "get_zone_info": false, 00:16:36.978 "zone_management": false, 00:16:36.978 "zone_append": false, 00:16:36.978 "compare": false, 00:16:36.978 "compare_and_write": false, 00:16:36.978 "abort": false, 00:16:36.978 "seek_hole": false, 00:16:36.978 "seek_data": false, 00:16:36.978 "copy": false, 00:16:36.978 "nvme_iov_md": false 00:16:36.978 }, 00:16:36.978 "driver_specific": { 00:16:36.978 "raid": { 00:16:36.978 "uuid": "8c66fdf5-7671-4534-8477-6af70f73d501", 00:16:36.978 "strip_size_kb": 64, 
00:16:36.978 "state": "online", 00:16:36.978 "raid_level": "raid5f", 00:16:36.978 "superblock": false, 00:16:36.978 "num_base_bdevs": 4, 00:16:36.978 "num_base_bdevs_discovered": 4, 00:16:36.978 "num_base_bdevs_operational": 4, 00:16:36.978 "base_bdevs_list": [ 00:16:36.978 { 00:16:36.978 "name": "BaseBdev1", 00:16:36.978 "uuid": "a198b537-0ce1-4a67-bc32-660f4d579cd7", 00:16:36.978 "is_configured": true, 00:16:36.978 "data_offset": 0, 00:16:36.978 "data_size": 65536 00:16:36.978 }, 00:16:36.978 { 00:16:36.978 "name": "BaseBdev2", 00:16:36.978 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:36.978 "is_configured": true, 00:16:36.978 "data_offset": 0, 00:16:36.978 "data_size": 65536 00:16:36.978 }, 00:16:36.978 { 00:16:36.978 "name": "BaseBdev3", 00:16:36.978 "uuid": "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1", 00:16:36.978 "is_configured": true, 00:16:36.978 "data_offset": 0, 00:16:36.978 "data_size": 65536 00:16:36.978 }, 00:16:36.978 { 00:16:36.978 "name": "BaseBdev4", 00:16:36.978 "uuid": "8ae4654d-65fa-4db8-87af-d06bace698d7", 00:16:36.978 "is_configured": true, 00:16:36.978 "data_offset": 0, 00:16:36.978 "data_size": 65536 00:16:36.978 } 00:16:36.978 ] 00:16:36.978 } 00:16:36.978 } 00:16:36.978 }' 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:36.978 BaseBdev2 00:16:36.978 BaseBdev3 00:16:36.978 BaseBdev4' 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.978 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.979 08:27:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.979 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.321 [2024-12-13 08:27:49.383297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.321 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.322 08:27:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.322 "name": "Existed_Raid", 00:16:37.322 "uuid": "8c66fdf5-7671-4534-8477-6af70f73d501", 00:16:37.322 "strip_size_kb": 64, 00:16:37.322 "state": "online", 00:16:37.322 "raid_level": "raid5f", 00:16:37.322 "superblock": false, 00:16:37.322 "num_base_bdevs": 4, 00:16:37.322 "num_base_bdevs_discovered": 3, 00:16:37.322 "num_base_bdevs_operational": 3, 00:16:37.322 "base_bdevs_list": [ 00:16:37.322 { 00:16:37.322 "name": null, 00:16:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.322 "is_configured": false, 00:16:37.322 "data_offset": 0, 00:16:37.322 "data_size": 65536 00:16:37.322 }, 00:16:37.322 { 00:16:37.322 "name": "BaseBdev2", 00:16:37.322 "uuid": "8e39c730-270d-40f5-875f-3cb4daee5cff", 00:16:37.322 "is_configured": true, 00:16:37.322 "data_offset": 0, 00:16:37.322 "data_size": 65536 00:16:37.322 }, 00:16:37.322 { 00:16:37.322 "name": "BaseBdev3", 00:16:37.322 "uuid": "4294bac9-2c9e-43ba-af06-f2f7eae0f6b1", 00:16:37.322 "is_configured": true, 00:16:37.322 "data_offset": 0, 00:16:37.322 "data_size": 65536 00:16:37.322 }, 00:16:37.322 { 00:16:37.322 "name": "BaseBdev4", 00:16:37.322 "uuid": "8ae4654d-65fa-4db8-87af-d06bace698d7", 00:16:37.322 "is_configured": true, 00:16:37.322 "data_offset": 0, 00:16:37.322 "data_size": 65536 00:16:37.322 } 00:16:37.322 ] 00:16:37.322 }' 00:16:37.322 
08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.322 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.891 08:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:37.891 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.891 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.891 [2024-12-13 08:27:50.006213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.892 [2024-12-13 08:27:50.006309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.892 [2024-12-13 08:27:50.101008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.892 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.892 [2024-12-13 08:27:50.160922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.151 [2024-12-13 08:27:50.316979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:38.151 [2024-12-13 08:27:50.317031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.151 BaseBdev2 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.151 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.411 [ 00:16:38.411 { 00:16:38.411 "name": "BaseBdev2", 00:16:38.411 "aliases": [ 00:16:38.411 "765df3a6-1083-449f-ac7a-2de026d6aed1" 00:16:38.411 ], 00:16:38.411 "product_name": "Malloc disk", 00:16:38.411 "block_size": 512, 00:16:38.411 "num_blocks": 65536, 00:16:38.411 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:38.411 "assigned_rate_limits": { 00:16:38.411 "rw_ios_per_sec": 0, 00:16:38.411 "rw_mbytes_per_sec": 0, 00:16:38.411 "r_mbytes_per_sec": 0, 00:16:38.411 "w_mbytes_per_sec": 0 00:16:38.411 }, 00:16:38.411 "claimed": false, 00:16:38.411 "zoned": false, 00:16:38.411 "supported_io_types": { 00:16:38.411 "read": true, 00:16:38.411 "write": true, 00:16:38.411 "unmap": true, 00:16:38.411 "flush": true, 00:16:38.411 "reset": true, 00:16:38.411 "nvme_admin": false, 00:16:38.411 "nvme_io": false, 00:16:38.411 "nvme_io_md": false, 00:16:38.411 "write_zeroes": true, 00:16:38.411 "zcopy": true, 00:16:38.411 "get_zone_info": false, 00:16:38.411 "zone_management": false, 00:16:38.411 "zone_append": false, 00:16:38.411 "compare": false, 00:16:38.411 "compare_and_write": false, 00:16:38.411 "abort": true, 00:16:38.411 "seek_hole": false, 00:16:38.411 "seek_data": false, 00:16:38.411 "copy": true, 00:16:38.411 "nvme_iov_md": false 00:16:38.411 }, 00:16:38.411 "memory_domains": [ 00:16:38.411 { 00:16:38.411 "dma_device_id": "system", 00:16:38.411 
"dma_device_type": 1 00:16:38.411 }, 00:16:38.411 { 00:16:38.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.411 "dma_device_type": 2 00:16:38.411 } 00:16:38.411 ], 00:16:38.411 "driver_specific": {} 00:16:38.411 } 00:16:38.411 ] 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.411 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 BaseBdev3 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.412 08:27:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 [ 00:16:38.412 { 00:16:38.412 "name": "BaseBdev3", 00:16:38.412 "aliases": [ 00:16:38.412 "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8" 00:16:38.412 ], 00:16:38.412 "product_name": "Malloc disk", 00:16:38.412 "block_size": 512, 00:16:38.412 "num_blocks": 65536, 00:16:38.412 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:38.412 "assigned_rate_limits": { 00:16:38.412 "rw_ios_per_sec": 0, 00:16:38.412 "rw_mbytes_per_sec": 0, 00:16:38.412 "r_mbytes_per_sec": 0, 00:16:38.412 "w_mbytes_per_sec": 0 00:16:38.412 }, 00:16:38.412 "claimed": false, 00:16:38.412 "zoned": false, 00:16:38.412 "supported_io_types": { 00:16:38.412 "read": true, 00:16:38.412 "write": true, 00:16:38.412 "unmap": true, 00:16:38.412 "flush": true, 00:16:38.412 "reset": true, 00:16:38.412 "nvme_admin": false, 00:16:38.412 "nvme_io": false, 00:16:38.412 "nvme_io_md": false, 00:16:38.412 "write_zeroes": true, 00:16:38.412 "zcopy": true, 00:16:38.412 "get_zone_info": false, 00:16:38.412 "zone_management": false, 00:16:38.412 "zone_append": false, 00:16:38.412 "compare": false, 00:16:38.412 "compare_and_write": false, 00:16:38.412 "abort": true, 00:16:38.412 "seek_hole": false, 00:16:38.412 "seek_data": false, 00:16:38.412 "copy": true, 00:16:38.412 "nvme_iov_md": false 00:16:38.412 }, 00:16:38.412 "memory_domains": [ 00:16:38.412 { 00:16:38.412 
"dma_device_id": "system", 00:16:38.412 "dma_device_type": 1 00:16:38.412 }, 00:16:38.412 { 00:16:38.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.412 "dma_device_type": 2 00:16:38.412 } 00:16:38.412 ], 00:16:38.412 "driver_specific": {} 00:16:38.412 } 00:16:38.412 ] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 BaseBdev4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 [ 00:16:38.412 { 00:16:38.412 "name": "BaseBdev4", 00:16:38.412 "aliases": [ 00:16:38.412 "07aac8ec-1df0-4177-a14f-d469faeb5210" 00:16:38.412 ], 00:16:38.412 "product_name": "Malloc disk", 00:16:38.412 "block_size": 512, 00:16:38.412 "num_blocks": 65536, 00:16:38.412 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:38.412 "assigned_rate_limits": { 00:16:38.412 "rw_ios_per_sec": 0, 00:16:38.412 "rw_mbytes_per_sec": 0, 00:16:38.412 "r_mbytes_per_sec": 0, 00:16:38.412 "w_mbytes_per_sec": 0 00:16:38.412 }, 00:16:38.412 "claimed": false, 00:16:38.412 "zoned": false, 00:16:38.412 "supported_io_types": { 00:16:38.412 "read": true, 00:16:38.412 "write": true, 00:16:38.412 "unmap": true, 00:16:38.412 "flush": true, 00:16:38.412 "reset": true, 00:16:38.412 "nvme_admin": false, 00:16:38.412 "nvme_io": false, 00:16:38.412 "nvme_io_md": false, 00:16:38.412 "write_zeroes": true, 00:16:38.412 "zcopy": true, 00:16:38.412 "get_zone_info": false, 00:16:38.412 "zone_management": false, 00:16:38.412 "zone_append": false, 00:16:38.412 "compare": false, 00:16:38.412 "compare_and_write": false, 00:16:38.412 "abort": true, 00:16:38.412 "seek_hole": false, 00:16:38.412 "seek_data": false, 00:16:38.412 "copy": true, 00:16:38.412 "nvme_iov_md": false 00:16:38.412 }, 00:16:38.412 "memory_domains": [ 
00:16:38.412 { 00:16:38.412 "dma_device_id": "system", 00:16:38.412 "dma_device_type": 1 00:16:38.412 }, 00:16:38.412 { 00:16:38.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.412 "dma_device_type": 2 00:16:38.412 } 00:16:38.412 ], 00:16:38.412 "driver_specific": {} 00:16:38.412 } 00:16:38.412 ] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 [2024-12-13 08:27:50.721650] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.412 [2024-12-13 08:27:50.721755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.412 [2024-12-13 08:27:50.721803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.412 [2024-12-13 08:27:50.723690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.412 [2024-12-13 08:27:50.723788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.412 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.672 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.672 "name": "Existed_Raid", 00:16:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.672 "strip_size_kb": 64, 00:16:38.672 "state": "configuring", 00:16:38.672 "raid_level": "raid5f", 00:16:38.672 
"superblock": false, 00:16:38.672 "num_base_bdevs": 4, 00:16:38.672 "num_base_bdevs_discovered": 3, 00:16:38.672 "num_base_bdevs_operational": 4, 00:16:38.672 "base_bdevs_list": [ 00:16:38.672 { 00:16:38.672 "name": "BaseBdev1", 00:16:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.672 "is_configured": false, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 0 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev2", 00:16:38.672 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:38.672 "is_configured": true, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 65536 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev3", 00:16:38.672 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:38.672 "is_configured": true, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 65536 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev4", 00:16:38.672 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:38.672 "is_configured": true, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 65536 00:16:38.672 } 00:16:38.672 ] 00:16:38.672 }' 00:16:38.672 08:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.672 08:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.931 [2024-12-13 08:27:51.144957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.931 "name": "Existed_Raid", 00:16:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.931 "strip_size_kb": 64, 00:16:38.931 "state": "configuring", 00:16:38.931 "raid_level": "raid5f", 00:16:38.931 "superblock": false, 
00:16:38.931 "num_base_bdevs": 4, 00:16:38.931 "num_base_bdevs_discovered": 2, 00:16:38.931 "num_base_bdevs_operational": 4, 00:16:38.931 "base_bdevs_list": [ 00:16:38.931 { 00:16:38.931 "name": "BaseBdev1", 00:16:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.931 "is_configured": false, 00:16:38.931 "data_offset": 0, 00:16:38.931 "data_size": 0 00:16:38.931 }, 00:16:38.931 { 00:16:38.931 "name": null, 00:16:38.931 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:38.931 "is_configured": false, 00:16:38.931 "data_offset": 0, 00:16:38.931 "data_size": 65536 00:16:38.931 }, 00:16:38.931 { 00:16:38.931 "name": "BaseBdev3", 00:16:38.931 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:38.931 "is_configured": true, 00:16:38.931 "data_offset": 0, 00:16:38.931 "data_size": 65536 00:16:38.931 }, 00:16:38.931 { 00:16:38.931 "name": "BaseBdev4", 00:16:38.931 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:38.931 "is_configured": true, 00:16:38.931 "data_offset": 0, 00:16:38.931 "data_size": 65536 00:16:38.931 } 00:16:38.931 ] 00:16:38.931 }' 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.931 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.523 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:39.523 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:39.524 
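The trace above repeats the same `verify_raid_bdev_state` pattern after every base-bdev add or remove: fetch `bdev_raid_get_bdevs all`, filter out the `Existed_Raid` entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare `state`, `raid_level`, `strip_size_kb`, and the base-bdev counts against the expected values. A minimal stand-alone sketch of those comparisons follows; the `raid_bdev_info` string and `get_field` helper are hypothetical stand-ins for the RPC output captured in the log (no running SPDK target or `jq` is assumed):

```shell
#!/bin/sh
# Hypothetical stand-in for the JSON that "rpc_cmd bdev_raid_get_bdevs all"
# piped through jq produces in the log; trimmed to the scalar fields compared.
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid5f","strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":4}'

# Pull one scalar field out of the flat JSON shape above (no jq dependency).
get_field() {
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/.*\"$1\": *\"\{0,1\}\([^,\"}]*\)\"\{0,1\}.*/\1/p"
}

# The same pass/fail comparisons verify_raid_bdev_state performs in the log.
[ "$(get_field state)" = "configuring" ] || { echo "state mismatch" >&2; exit 1; }
[ "$(get_field raid_level)" = "raid5f" ] || { echo "raid_level mismatch" >&2; exit 1; }
[ "$(get_field strip_size_kb)" = "64" ] || { echo "strip_size mismatch" >&2; exit 1; }
[ "$(get_field num_base_bdevs_operational)" = "4" ] || { echo "operational count mismatch" >&2; exit 1; }
echo "Existed_Raid state verified"
```

Note how `num_base_bdevs_discovered` is the one field that changes across the log's snapshots (3, then 2, then 3 again) while `expected_state` stays `configuring`, since the array cannot come online until all four operational base bdevs are configured.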
08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 [2024-12-13 08:27:51.658329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.524 BaseBdev1 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.524 
08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 [ 00:16:39.524 { 00:16:39.524 "name": "BaseBdev1", 00:16:39.524 "aliases": [ 00:16:39.524 "ebca29d7-3c93-4464-8948-9ca4742e103e" 00:16:39.524 ], 00:16:39.524 "product_name": "Malloc disk", 00:16:39.524 "block_size": 512, 00:16:39.524 "num_blocks": 65536, 00:16:39.524 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:39.524 "assigned_rate_limits": { 00:16:39.524 "rw_ios_per_sec": 0, 00:16:39.524 "rw_mbytes_per_sec": 0, 00:16:39.524 "r_mbytes_per_sec": 0, 00:16:39.524 "w_mbytes_per_sec": 0 00:16:39.524 }, 00:16:39.524 "claimed": true, 00:16:39.524 "claim_type": "exclusive_write", 00:16:39.524 "zoned": false, 00:16:39.524 "supported_io_types": { 00:16:39.524 "read": true, 00:16:39.524 "write": true, 00:16:39.524 "unmap": true, 00:16:39.524 "flush": true, 00:16:39.524 "reset": true, 00:16:39.524 "nvme_admin": false, 00:16:39.524 "nvme_io": false, 00:16:39.524 "nvme_io_md": false, 00:16:39.524 "write_zeroes": true, 00:16:39.524 "zcopy": true, 00:16:39.524 "get_zone_info": false, 00:16:39.524 "zone_management": false, 00:16:39.524 "zone_append": false, 00:16:39.524 "compare": false, 00:16:39.524 "compare_and_write": false, 00:16:39.524 "abort": true, 00:16:39.524 "seek_hole": false, 00:16:39.524 "seek_data": false, 00:16:39.524 "copy": true, 00:16:39.524 "nvme_iov_md": false 00:16:39.524 }, 00:16:39.524 "memory_domains": [ 00:16:39.524 { 00:16:39.524 "dma_device_id": "system", 00:16:39.524 "dma_device_type": 1 00:16:39.524 }, 00:16:39.524 { 00:16:39.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.524 "dma_device_type": 2 00:16:39.524 } 00:16:39.524 ], 00:16:39.524 "driver_specific": {} 00:16:39.524 } 00:16:39.524 ] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.524 08:27:51 
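After creating `BaseBdev1` with `bdev_malloc_create`, the log's `waitforbdev` helper defaults `bdev_timeout` to 2000, runs `bdev_wait_for_examine`, and then polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev is visible. A generic sketch of that poll-until-present loop is below; `wait_for_file` is a hypothetical readiness probe standing in for the RPC call, so no SPDK target is assumed:

```shell
#!/bin/sh
# Generic poll-with-timeout loop in the spirit of waitforbdev above.
wait_for_file() {
    path=$1
    timeout_ms=${2:-2000}    # same 2000 default the log shows for bdev_timeout
    elapsed=0
    while [ "$elapsed" -lt "$timeout_ms" ]; do
        # Readiness probe; the real helper runs "rpc_cmd bdev_get_bdevs -b <name> -t <timeout>".
        [ -e "$path" ] && return 0
        sleep 0.1 2>/dev/null || sleep 1   # fall back to whole seconds on shells without fractional sleep
        elapsed=$((elapsed + 100))
    done
    return 1
}

marker=$(mktemp)
if wait_for_file "$marker" 2000; then
    echo "resource appeared before timeout"
fi
rm -f "$marker"
```

Polling with a bounded timeout rather than blocking indefinitely is what lets the autotest fail fast with a nonzero return when a bdev never registers, instead of hanging the CI job.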
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.524 "name": "Existed_Raid", 00:16:39.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.524 "strip_size_kb": 64, 00:16:39.524 "state": 
"configuring", 00:16:39.524 "raid_level": "raid5f", 00:16:39.524 "superblock": false, 00:16:39.524 "num_base_bdevs": 4, 00:16:39.524 "num_base_bdevs_discovered": 3, 00:16:39.524 "num_base_bdevs_operational": 4, 00:16:39.524 "base_bdevs_list": [ 00:16:39.524 { 00:16:39.524 "name": "BaseBdev1", 00:16:39.524 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:39.524 "is_configured": true, 00:16:39.524 "data_offset": 0, 00:16:39.524 "data_size": 65536 00:16:39.524 }, 00:16:39.524 { 00:16:39.524 "name": null, 00:16:39.524 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:39.524 "is_configured": false, 00:16:39.524 "data_offset": 0, 00:16:39.524 "data_size": 65536 00:16:39.524 }, 00:16:39.524 { 00:16:39.524 "name": "BaseBdev3", 00:16:39.524 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:39.524 "is_configured": true, 00:16:39.524 "data_offset": 0, 00:16:39.524 "data_size": 65536 00:16:39.524 }, 00:16:39.524 { 00:16:39.524 "name": "BaseBdev4", 00:16:39.524 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:39.524 "is_configured": true, 00:16:39.524 "data_offset": 0, 00:16:39.524 "data_size": 65536 00:16:39.524 } 00:16:39.524 ] 00:16:39.524 }' 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.524 08:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.784 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.784 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.784 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.784 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:40.043 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.044 08:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.044 [2024-12-13 08:27:52.193509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.044 08:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.044 "name": "Existed_Raid", 00:16:40.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.044 "strip_size_kb": 64, 00:16:40.044 "state": "configuring", 00:16:40.044 "raid_level": "raid5f", 00:16:40.044 "superblock": false, 00:16:40.044 "num_base_bdevs": 4, 00:16:40.044 "num_base_bdevs_discovered": 2, 00:16:40.044 "num_base_bdevs_operational": 4, 00:16:40.044 "base_bdevs_list": [ 00:16:40.044 { 00:16:40.044 "name": "BaseBdev1", 00:16:40.044 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:40.044 "is_configured": true, 00:16:40.044 "data_offset": 0, 00:16:40.044 "data_size": 65536 00:16:40.044 }, 00:16:40.044 { 00:16:40.044 "name": null, 00:16:40.044 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:40.044 "is_configured": false, 00:16:40.044 "data_offset": 0, 00:16:40.044 "data_size": 65536 00:16:40.044 }, 00:16:40.044 { 00:16:40.044 "name": null, 00:16:40.044 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:40.044 "is_configured": false, 00:16:40.044 "data_offset": 0, 00:16:40.044 "data_size": 65536 00:16:40.044 }, 00:16:40.044 { 00:16:40.044 "name": "BaseBdev4", 00:16:40.044 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:40.044 "is_configured": true, 00:16:40.044 "data_offset": 0, 00:16:40.044 "data_size": 65536 00:16:40.044 } 00:16:40.044 ] 00:16:40.044 }' 00:16:40.044 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.044 08:27:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.303 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.303 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.303 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.303 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 [2024-12-13 08:27:52.708666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.563 
08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.563 "name": "Existed_Raid", 00:16:40.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.563 "strip_size_kb": 64, 00:16:40.563 "state": "configuring", 00:16:40.563 "raid_level": "raid5f", 00:16:40.563 "superblock": false, 00:16:40.563 "num_base_bdevs": 4, 00:16:40.563 "num_base_bdevs_discovered": 3, 00:16:40.563 "num_base_bdevs_operational": 4, 00:16:40.563 "base_bdevs_list": [ 00:16:40.563 { 00:16:40.563 "name": "BaseBdev1", 00:16:40.563 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:40.563 "is_configured": true, 00:16:40.563 "data_offset": 0, 00:16:40.563 "data_size": 65536 00:16:40.563 }, 00:16:40.563 { 00:16:40.563 "name": null, 00:16:40.563 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:40.563 "is_configured": 
false, 00:16:40.563 "data_offset": 0, 00:16:40.563 "data_size": 65536 00:16:40.563 }, 00:16:40.563 { 00:16:40.563 "name": "BaseBdev3", 00:16:40.563 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:40.563 "is_configured": true, 00:16:40.563 "data_offset": 0, 00:16:40.563 "data_size": 65536 00:16:40.563 }, 00:16:40.563 { 00:16:40.563 "name": "BaseBdev4", 00:16:40.563 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:40.563 "is_configured": true, 00:16:40.563 "data_offset": 0, 00:16:40.563 "data_size": 65536 00:16:40.563 } 00:16:40.563 ] 00:16:40.563 }' 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.563 08:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.823 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.823 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.823 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.823 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.823 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.082 [2024-12-13 08:27:53.215859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.082 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.083 "name": "Existed_Raid", 00:16:41.083 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:41.083 "strip_size_kb": 64, 00:16:41.083 "state": "configuring", 00:16:41.083 "raid_level": "raid5f", 00:16:41.083 "superblock": false, 00:16:41.083 "num_base_bdevs": 4, 00:16:41.083 "num_base_bdevs_discovered": 2, 00:16:41.083 "num_base_bdevs_operational": 4, 00:16:41.083 "base_bdevs_list": [ 00:16:41.083 { 00:16:41.083 "name": null, 00:16:41.083 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:41.083 "is_configured": false, 00:16:41.083 "data_offset": 0, 00:16:41.083 "data_size": 65536 00:16:41.083 }, 00:16:41.083 { 00:16:41.083 "name": null, 00:16:41.083 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:41.083 "is_configured": false, 00:16:41.083 "data_offset": 0, 00:16:41.083 "data_size": 65536 00:16:41.083 }, 00:16:41.083 { 00:16:41.083 "name": "BaseBdev3", 00:16:41.083 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:41.083 "is_configured": true, 00:16:41.083 "data_offset": 0, 00:16:41.083 "data_size": 65536 00:16:41.083 }, 00:16:41.083 { 00:16:41.083 "name": "BaseBdev4", 00:16:41.083 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:41.083 "is_configured": true, 00:16:41.083 "data_offset": 0, 00:16:41.083 "data_size": 65536 00:16:41.083 } 00:16:41.083 ] 00:16:41.083 }' 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.083 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 [2024-12-13 08:27:53.800445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.653 "name": "Existed_Raid", 00:16:41.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.653 "strip_size_kb": 64, 00:16:41.653 "state": "configuring", 00:16:41.653 "raid_level": "raid5f", 00:16:41.653 "superblock": false, 00:16:41.653 "num_base_bdevs": 4, 00:16:41.653 "num_base_bdevs_discovered": 3, 00:16:41.653 "num_base_bdevs_operational": 4, 00:16:41.653 "base_bdevs_list": [ 00:16:41.653 { 00:16:41.653 "name": null, 00:16:41.653 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:41.653 "is_configured": false, 00:16:41.653 "data_offset": 0, 00:16:41.653 "data_size": 65536 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev2", 00:16:41.653 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 0, 00:16:41.653 "data_size": 65536 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev3", 00:16:41.653 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 0, 00:16:41.653 "data_size": 65536 00:16:41.653 }, 00:16:41.653 { 00:16:41.653 "name": "BaseBdev4", 00:16:41.653 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:41.653 "is_configured": true, 00:16:41.653 "data_offset": 0, 00:16:41.653 "data_size": 65536 00:16:41.653 } 00:16:41.653 ] 00:16:41.653 }' 00:16:41.653 08:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.653 08:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:41.913 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ebca29d7-3c93-4464-8948-9ca4742e103e 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 [2024-12-13 08:27:54.333480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:42.173 [2024-12-13 
08:27:54.333621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:42.173 [2024-12-13 08:27:54.333634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:42.173 [2024-12-13 08:27:54.333913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:42.173 [2024-12-13 08:27:54.340948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:42.173 [2024-12-13 08:27:54.340972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:42.173 [2024-12-13 08:27:54.341234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.173 NewBaseBdev 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 [ 00:16:42.173 { 00:16:42.173 "name": "NewBaseBdev", 00:16:42.173 "aliases": [ 00:16:42.173 "ebca29d7-3c93-4464-8948-9ca4742e103e" 00:16:42.173 ], 00:16:42.173 "product_name": "Malloc disk", 00:16:42.173 "block_size": 512, 00:16:42.173 "num_blocks": 65536, 00:16:42.173 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:42.173 "assigned_rate_limits": { 00:16:42.173 "rw_ios_per_sec": 0, 00:16:42.173 "rw_mbytes_per_sec": 0, 00:16:42.173 "r_mbytes_per_sec": 0, 00:16:42.173 "w_mbytes_per_sec": 0 00:16:42.173 }, 00:16:42.173 "claimed": true, 00:16:42.173 "claim_type": "exclusive_write", 00:16:42.173 "zoned": false, 00:16:42.173 "supported_io_types": { 00:16:42.173 "read": true, 00:16:42.173 "write": true, 00:16:42.173 "unmap": true, 00:16:42.173 "flush": true, 00:16:42.173 "reset": true, 00:16:42.173 "nvme_admin": false, 00:16:42.173 "nvme_io": false, 00:16:42.173 "nvme_io_md": false, 00:16:42.173 "write_zeroes": true, 00:16:42.173 "zcopy": true, 00:16:42.173 "get_zone_info": false, 00:16:42.173 "zone_management": false, 00:16:42.173 "zone_append": false, 00:16:42.173 "compare": false, 00:16:42.173 "compare_and_write": false, 00:16:42.173 "abort": true, 00:16:42.173 "seek_hole": false, 00:16:42.173 "seek_data": false, 00:16:42.173 "copy": true, 00:16:42.173 "nvme_iov_md": false 00:16:42.173 }, 00:16:42.173 "memory_domains": [ 00:16:42.173 { 00:16:42.173 "dma_device_id": "system", 00:16:42.173 "dma_device_type": 1 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.173 "dma_device_type": 2 00:16:42.173 } 
00:16:42.173 ], 00:16:42.173 "driver_specific": {} 00:16:42.173 } 00:16:42.173 ] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.173 "name": "Existed_Raid", 00:16:42.173 "uuid": "881dd978-b902-4547-bc9a-cee7328b7c42", 00:16:42.173 "strip_size_kb": 64, 00:16:42.173 "state": "online", 00:16:42.173 "raid_level": "raid5f", 00:16:42.173 "superblock": false, 00:16:42.173 "num_base_bdevs": 4, 00:16:42.173 "num_base_bdevs_discovered": 4, 00:16:42.173 "num_base_bdevs_operational": 4, 00:16:42.173 "base_bdevs_list": [ 00:16:42.173 { 00:16:42.173 "name": "NewBaseBdev", 00:16:42.173 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 0, 00:16:42.173 "data_size": 65536 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev2", 00:16:42.173 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 0, 00:16:42.173 "data_size": 65536 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev3", 00:16:42.173 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 0, 00:16:42.173 "data_size": 65536 00:16:42.173 }, 00:16:42.173 { 00:16:42.173 "name": "BaseBdev4", 00:16:42.173 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:42.173 "is_configured": true, 00:16:42.173 "data_offset": 0, 00:16:42.173 "data_size": 65536 00:16:42.173 } 00:16:42.173 ] 00:16:42.173 }' 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.173 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.743 [2024-12-13 08:27:54.845473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.743 "name": "Existed_Raid", 00:16:42.743 "aliases": [ 00:16:42.743 "881dd978-b902-4547-bc9a-cee7328b7c42" 00:16:42.743 ], 00:16:42.743 "product_name": "Raid Volume", 00:16:42.743 "block_size": 512, 00:16:42.743 "num_blocks": 196608, 00:16:42.743 "uuid": "881dd978-b902-4547-bc9a-cee7328b7c42", 00:16:42.743 "assigned_rate_limits": { 00:16:42.743 "rw_ios_per_sec": 0, 00:16:42.743 "rw_mbytes_per_sec": 0, 00:16:42.743 "r_mbytes_per_sec": 0, 00:16:42.743 "w_mbytes_per_sec": 0 00:16:42.743 }, 00:16:42.743 "claimed": false, 00:16:42.743 "zoned": false, 00:16:42.743 "supported_io_types": { 00:16:42.743 "read": true, 00:16:42.743 "write": true, 00:16:42.743 "unmap": false, 00:16:42.743 "flush": false, 00:16:42.743 "reset": true, 00:16:42.743 "nvme_admin": false, 00:16:42.743 "nvme_io": false, 00:16:42.743 "nvme_io_md": 
false, 00:16:42.743 "write_zeroes": true, 00:16:42.743 "zcopy": false, 00:16:42.743 "get_zone_info": false, 00:16:42.743 "zone_management": false, 00:16:42.743 "zone_append": false, 00:16:42.743 "compare": false, 00:16:42.743 "compare_and_write": false, 00:16:42.743 "abort": false, 00:16:42.743 "seek_hole": false, 00:16:42.743 "seek_data": false, 00:16:42.743 "copy": false, 00:16:42.743 "nvme_iov_md": false 00:16:42.743 }, 00:16:42.743 "driver_specific": { 00:16:42.743 "raid": { 00:16:42.743 "uuid": "881dd978-b902-4547-bc9a-cee7328b7c42", 00:16:42.743 "strip_size_kb": 64, 00:16:42.743 "state": "online", 00:16:42.743 "raid_level": "raid5f", 00:16:42.743 "superblock": false, 00:16:42.743 "num_base_bdevs": 4, 00:16:42.743 "num_base_bdevs_discovered": 4, 00:16:42.743 "num_base_bdevs_operational": 4, 00:16:42.743 "base_bdevs_list": [ 00:16:42.743 { 00:16:42.743 "name": "NewBaseBdev", 00:16:42.743 "uuid": "ebca29d7-3c93-4464-8948-9ca4742e103e", 00:16:42.743 "is_configured": true, 00:16:42.743 "data_offset": 0, 00:16:42.743 "data_size": 65536 00:16:42.743 }, 00:16:42.743 { 00:16:42.743 "name": "BaseBdev2", 00:16:42.743 "uuid": "765df3a6-1083-449f-ac7a-2de026d6aed1", 00:16:42.743 "is_configured": true, 00:16:42.743 "data_offset": 0, 00:16:42.743 "data_size": 65536 00:16:42.743 }, 00:16:42.743 { 00:16:42.743 "name": "BaseBdev3", 00:16:42.743 "uuid": "1dd85d4a-43fc-4da8-95f2-51ff82e61fa8", 00:16:42.743 "is_configured": true, 00:16:42.743 "data_offset": 0, 00:16:42.743 "data_size": 65536 00:16:42.743 }, 00:16:42.743 { 00:16:42.743 "name": "BaseBdev4", 00:16:42.743 "uuid": "07aac8ec-1df0-4177-a14f-d469faeb5210", 00:16:42.743 "is_configured": true, 00:16:42.743 "data_offset": 0, 00:16:42.743 "data_size": 65536 00:16:42.743 } 00:16:42.743 ] 00:16:42.743 } 00:16:42.743 } 00:16:42.743 }' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.743 08:27:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:42.743 BaseBdev2 00:16:42.743 BaseBdev3 00:16:42.743 BaseBdev4' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.743 08:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.744 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.002 [2024-12-13 08:27:55.180639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.002 [2024-12-13 08:27:55.180711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.002 [2024-12-13 08:27:55.180808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.002 [2024-12-13 08:27:55.181158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.002 [2024-12-13 08:27:55.181215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82953 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82953 ']' 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82953 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.002 08:27:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82953 00:16:43.002 killing process with pid 82953 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.002 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.003 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82953' 00:16:43.003 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82953 00:16:43.003 08:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82953 00:16:43.003 [2024-12-13 08:27:55.225728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.570 [2024-12-13 08:27:55.630449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.509 ************************************ 00:16:44.509 END TEST raid5f_state_function_test 00:16:44.509 ************************************ 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:44.509 00:16:44.509 real 0m11.706s 00:16:44.509 user 0m18.548s 00:16:44.509 sys 0m2.150s 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 08:27:56 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:44.509 08:27:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:44.509 08:27:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.509 08:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.509 ************************************ 00:16:44.509 START TEST 
raid5f_state_function_test_sb 00:16:44.509 ************************************ 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:44.509 
08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83626 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83626' 00:16:44.509 Process raid pid: 83626 00:16:44.509 08:27:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83626 00:16:44.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83626 ']' 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.509 08:27:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.769 [2024-12-13 08:27:56.935211] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:16:44.769 [2024-12-13 08:27:56.935432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.769 [2024-12-13 08:27:57.110947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.029 [2024-12-13 08:27:57.229538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.288 [2024-12-13 08:27:57.430064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.288 [2024-12-13 08:27:57.430109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.548 [2024-12-13 08:27:57.775342] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.548 [2024-12-13 08:27:57.775400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.548 [2024-12-13 08:27:57.775420] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.548 [2024-12-13 08:27:57.775432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.548 [2024-12-13 08:27:57.775440] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:45.548 [2024-12-13 08:27:57.775450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.548 [2024-12-13 08:27:57.775457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:45.548 [2024-12-13 08:27:57.775466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.548 "name": "Existed_Raid", 00:16:45.548 "uuid": "b428cb10-a6f2-4383-9dec-b41ae57fb786", 00:16:45.548 "strip_size_kb": 64, 00:16:45.548 "state": "configuring", 00:16:45.548 "raid_level": "raid5f", 00:16:45.548 "superblock": true, 00:16:45.548 "num_base_bdevs": 4, 00:16:45.548 "num_base_bdevs_discovered": 0, 00:16:45.548 "num_base_bdevs_operational": 4, 00:16:45.548 "base_bdevs_list": [ 00:16:45.548 { 00:16:45.548 "name": "BaseBdev1", 00:16:45.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.548 "is_configured": false, 00:16:45.548 "data_offset": 0, 00:16:45.548 "data_size": 0 00:16:45.548 }, 00:16:45.548 { 00:16:45.548 "name": "BaseBdev2", 00:16:45.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.548 "is_configured": false, 00:16:45.548 "data_offset": 0, 00:16:45.548 "data_size": 0 00:16:45.548 }, 00:16:45.548 { 00:16:45.548 "name": "BaseBdev3", 00:16:45.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.548 "is_configured": false, 00:16:45.548 "data_offset": 0, 00:16:45.548 "data_size": 0 00:16:45.548 }, 00:16:45.548 { 00:16:45.548 "name": "BaseBdev4", 00:16:45.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.548 "is_configured": false, 00:16:45.548 "data_offset": 0, 00:16:45.548 "data_size": 0 00:16:45.548 } 00:16:45.548 ] 00:16:45.548 }' 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.548 08:27:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 [2024-12-13 08:27:58.218507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.118 [2024-12-13 08:27:58.218612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 [2024-12-13 08:27:58.226500] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.118 [2024-12-13 08:27:58.226582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.118 [2024-12-13 08:27:58.226631] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.118 [2024-12-13 08:27:58.226655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.118 [2024-12-13 08:27:58.226686] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.118 [2024-12-13 08:27:58.226714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.118 [2024-12-13 08:27:58.226750] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.118 [2024-12-13 08:27:58.226773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 [2024-12-13 08:27:58.270812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.118 BaseBdev1 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.118 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 [ 00:16:46.118 { 00:16:46.118 "name": "BaseBdev1", 00:16:46.118 "aliases": [ 00:16:46.118 "82f690ce-9a27-4ce6-bf24-7f52536b2358" 00:16:46.118 ], 00:16:46.118 "product_name": "Malloc disk", 00:16:46.118 "block_size": 512, 00:16:46.118 "num_blocks": 65536, 00:16:46.118 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:46.118 "assigned_rate_limits": { 00:16:46.118 "rw_ios_per_sec": 0, 00:16:46.118 "rw_mbytes_per_sec": 0, 00:16:46.118 "r_mbytes_per_sec": 0, 00:16:46.118 "w_mbytes_per_sec": 0 00:16:46.118 }, 00:16:46.118 "claimed": true, 00:16:46.118 "claim_type": "exclusive_write", 00:16:46.118 "zoned": false, 00:16:46.118 "supported_io_types": { 00:16:46.118 "read": true, 00:16:46.118 "write": true, 00:16:46.118 "unmap": true, 00:16:46.119 "flush": true, 00:16:46.119 "reset": true, 00:16:46.119 "nvme_admin": false, 00:16:46.119 "nvme_io": false, 00:16:46.119 "nvme_io_md": false, 00:16:46.119 "write_zeroes": true, 00:16:46.119 "zcopy": true, 00:16:46.119 "get_zone_info": false, 00:16:46.119 "zone_management": false, 00:16:46.119 "zone_append": false, 00:16:46.119 "compare": false, 00:16:46.119 "compare_and_write": false, 00:16:46.119 "abort": true, 00:16:46.119 "seek_hole": false, 00:16:46.119 "seek_data": false, 00:16:46.119 "copy": true, 00:16:46.119 "nvme_iov_md": false 00:16:46.119 }, 00:16:46.119 "memory_domains": [ 00:16:46.119 { 00:16:46.119 "dma_device_id": "system", 00:16:46.119 "dma_device_type": 1 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:46.119 "dma_device_type": 2 00:16:46.119 } 00:16:46.119 ], 00:16:46.119 "driver_specific": {} 00:16:46.119 } 00:16:46.119 ] 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 08:27:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.119 "name": "Existed_Raid", 00:16:46.119 "uuid": "e8ddd478-bc5f-4fc3-a6a8-c3c23eacc58b", 00:16:46.119 "strip_size_kb": 64, 00:16:46.119 "state": "configuring", 00:16:46.119 "raid_level": "raid5f", 00:16:46.119 "superblock": true, 00:16:46.119 "num_base_bdevs": 4, 00:16:46.119 "num_base_bdevs_discovered": 1, 00:16:46.119 "num_base_bdevs_operational": 4, 00:16:46.119 "base_bdevs_list": [ 00:16:46.119 { 00:16:46.119 "name": "BaseBdev1", 00:16:46.119 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:46.119 "is_configured": true, 00:16:46.119 "data_offset": 2048, 00:16:46.119 "data_size": 63488 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev2", 00:16:46.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.119 "is_configured": false, 00:16:46.119 "data_offset": 0, 00:16:46.119 "data_size": 0 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev3", 00:16:46.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.119 "is_configured": false, 00:16:46.119 "data_offset": 0, 00:16:46.119 "data_size": 0 00:16:46.119 }, 00:16:46.119 { 00:16:46.119 "name": "BaseBdev4", 00:16:46.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.119 "is_configured": false, 00:16:46.119 "data_offset": 0, 00:16:46.119 "data_size": 0 00:16:46.119 } 00:16:46.119 ] 00:16:46.119 }' 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.119 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.379 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.379 08:27:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.379 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.379 [2024-12-13 08:27:58.742081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.379 [2024-12-13 08:27:58.742154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.639 [2024-12-13 08:27:58.754142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.639 [2024-12-13 08:27:58.756073] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.639 [2024-12-13 08:27:58.756124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.639 [2024-12-13 08:27:58.756135] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.639 [2024-12-13 08:27:58.756145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.639 [2024-12-13 08:27:58.756152] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.639 [2024-12-13 08:27:58.756161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.639 08:27:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.639 "name": "Existed_Raid", 00:16:46.639 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:46.639 "strip_size_kb": 64, 00:16:46.639 "state": "configuring", 00:16:46.639 "raid_level": "raid5f", 00:16:46.639 "superblock": true, 00:16:46.639 "num_base_bdevs": 4, 00:16:46.639 "num_base_bdevs_discovered": 1, 00:16:46.639 "num_base_bdevs_operational": 4, 00:16:46.639 "base_bdevs_list": [ 00:16:46.639 { 00:16:46.639 "name": "BaseBdev1", 00:16:46.639 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:46.639 "is_configured": true, 00:16:46.639 "data_offset": 2048, 00:16:46.639 "data_size": 63488 00:16:46.639 }, 00:16:46.639 { 00:16:46.639 "name": "BaseBdev2", 00:16:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.639 "is_configured": false, 00:16:46.639 "data_offset": 0, 00:16:46.639 "data_size": 0 00:16:46.639 }, 00:16:46.639 { 00:16:46.639 "name": "BaseBdev3", 00:16:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.639 "is_configured": false, 00:16:46.639 "data_offset": 0, 00:16:46.639 "data_size": 0 00:16:46.639 }, 00:16:46.639 { 00:16:46.639 "name": "BaseBdev4", 00:16:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.639 "is_configured": false, 00:16:46.639 "data_offset": 0, 00:16:46.639 "data_size": 0 00:16:46.639 } 00:16:46.639 ] 00:16:46.639 }' 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.639 08:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 [2024-12-13 08:27:59.207366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.899 BaseBdev2 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 [ 00:16:46.899 { 00:16:46.899 "name": "BaseBdev2", 00:16:46.899 "aliases": [ 00:16:46.899 
"9e4c8a08-72d8-4aa6-842c-e1435fa987dd" 00:16:46.899 ], 00:16:46.899 "product_name": "Malloc disk", 00:16:46.899 "block_size": 512, 00:16:46.899 "num_blocks": 65536, 00:16:46.899 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:46.899 "assigned_rate_limits": { 00:16:46.899 "rw_ios_per_sec": 0, 00:16:46.899 "rw_mbytes_per_sec": 0, 00:16:46.899 "r_mbytes_per_sec": 0, 00:16:46.899 "w_mbytes_per_sec": 0 00:16:46.899 }, 00:16:46.899 "claimed": true, 00:16:46.899 "claim_type": "exclusive_write", 00:16:46.899 "zoned": false, 00:16:46.899 "supported_io_types": { 00:16:46.899 "read": true, 00:16:46.899 "write": true, 00:16:46.899 "unmap": true, 00:16:46.899 "flush": true, 00:16:46.899 "reset": true, 00:16:46.899 "nvme_admin": false, 00:16:46.899 "nvme_io": false, 00:16:46.899 "nvme_io_md": false, 00:16:46.899 "write_zeroes": true, 00:16:46.899 "zcopy": true, 00:16:46.899 "get_zone_info": false, 00:16:46.899 "zone_management": false, 00:16:46.899 "zone_append": false, 00:16:46.899 "compare": false, 00:16:46.899 "compare_and_write": false, 00:16:46.899 "abort": true, 00:16:46.899 "seek_hole": false, 00:16:46.899 "seek_data": false, 00:16:46.899 "copy": true, 00:16:46.899 "nvme_iov_md": false 00:16:46.899 }, 00:16:46.899 "memory_domains": [ 00:16:46.899 { 00:16:46.899 "dma_device_id": "system", 00:16:46.899 "dma_device_type": 1 00:16:46.899 }, 00:16:46.899 { 00:16:46.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.899 "dma_device_type": 2 00:16:46.899 } 00:16:46.899 ], 00:16:46.899 "driver_specific": {} 00:16:46.899 } 00:16:46.899 ] 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.899 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.159 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.159 "name": "Existed_Raid", 00:16:47.159 "uuid": 
"254aa34c-c3dd-427a-8461-8cd684555581", 00:16:47.159 "strip_size_kb": 64, 00:16:47.159 "state": "configuring", 00:16:47.159 "raid_level": "raid5f", 00:16:47.159 "superblock": true, 00:16:47.159 "num_base_bdevs": 4, 00:16:47.159 "num_base_bdevs_discovered": 2, 00:16:47.159 "num_base_bdevs_operational": 4, 00:16:47.159 "base_bdevs_list": [ 00:16:47.159 { 00:16:47.159 "name": "BaseBdev1", 00:16:47.159 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:47.159 "is_configured": true, 00:16:47.159 "data_offset": 2048, 00:16:47.159 "data_size": 63488 00:16:47.159 }, 00:16:47.159 { 00:16:47.159 "name": "BaseBdev2", 00:16:47.159 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:47.159 "is_configured": true, 00:16:47.159 "data_offset": 2048, 00:16:47.159 "data_size": 63488 00:16:47.159 }, 00:16:47.159 { 00:16:47.159 "name": "BaseBdev3", 00:16:47.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.159 "is_configured": false, 00:16:47.159 "data_offset": 0, 00:16:47.159 "data_size": 0 00:16:47.159 }, 00:16:47.159 { 00:16:47.159 "name": "BaseBdev4", 00:16:47.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.159 "is_configured": false, 00:16:47.159 "data_offset": 0, 00:16:47.159 "data_size": 0 00:16:47.159 } 00:16:47.159 ] 00:16:47.159 }' 00:16:47.159 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.159 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 [2024-12-13 08:27:59.659063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.419 BaseBdev3 
00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 [ 00:16:47.419 { 00:16:47.419 "name": "BaseBdev3", 00:16:47.419 "aliases": [ 00:16:47.419 "38dacc3a-b6a3-46b7-a369-8e0f7411aef8" 00:16:47.419 ], 00:16:47.419 "product_name": "Malloc disk", 00:16:47.419 "block_size": 512, 00:16:47.419 "num_blocks": 65536, 00:16:47.419 "uuid": "38dacc3a-b6a3-46b7-a369-8e0f7411aef8", 00:16:47.419 
"assigned_rate_limits": { 00:16:47.419 "rw_ios_per_sec": 0, 00:16:47.419 "rw_mbytes_per_sec": 0, 00:16:47.419 "r_mbytes_per_sec": 0, 00:16:47.419 "w_mbytes_per_sec": 0 00:16:47.419 }, 00:16:47.419 "claimed": true, 00:16:47.419 "claim_type": "exclusive_write", 00:16:47.419 "zoned": false, 00:16:47.419 "supported_io_types": { 00:16:47.419 "read": true, 00:16:47.419 "write": true, 00:16:47.419 "unmap": true, 00:16:47.419 "flush": true, 00:16:47.419 "reset": true, 00:16:47.419 "nvme_admin": false, 00:16:47.419 "nvme_io": false, 00:16:47.419 "nvme_io_md": false, 00:16:47.419 "write_zeroes": true, 00:16:47.419 "zcopy": true, 00:16:47.419 "get_zone_info": false, 00:16:47.419 "zone_management": false, 00:16:47.419 "zone_append": false, 00:16:47.419 "compare": false, 00:16:47.419 "compare_and_write": false, 00:16:47.419 "abort": true, 00:16:47.419 "seek_hole": false, 00:16:47.419 "seek_data": false, 00:16:47.419 "copy": true, 00:16:47.419 "nvme_iov_md": false 00:16:47.419 }, 00:16:47.419 "memory_domains": [ 00:16:47.419 { 00:16:47.419 "dma_device_id": "system", 00:16:47.419 "dma_device_type": 1 00:16:47.419 }, 00:16:47.419 { 00:16:47.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.419 "dma_device_type": 2 00:16:47.419 } 00:16:47.419 ], 00:16:47.419 "driver_specific": {} 00:16:47.419 } 00:16:47.419 ] 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.420 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.420 "name": "Existed_Raid", 00:16:47.420 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:47.420 "strip_size_kb": 64, 00:16:47.420 "state": "configuring", 00:16:47.420 "raid_level": "raid5f", 00:16:47.420 "superblock": true, 00:16:47.420 "num_base_bdevs": 4, 00:16:47.420 "num_base_bdevs_discovered": 3, 
00:16:47.420 "num_base_bdevs_operational": 4, 00:16:47.420 "base_bdevs_list": [ 00:16:47.420 { 00:16:47.420 "name": "BaseBdev1", 00:16:47.420 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:47.420 "is_configured": true, 00:16:47.420 "data_offset": 2048, 00:16:47.420 "data_size": 63488 00:16:47.420 }, 00:16:47.420 { 00:16:47.420 "name": "BaseBdev2", 00:16:47.420 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:47.420 "is_configured": true, 00:16:47.420 "data_offset": 2048, 00:16:47.420 "data_size": 63488 00:16:47.420 }, 00:16:47.420 { 00:16:47.420 "name": "BaseBdev3", 00:16:47.420 "uuid": "38dacc3a-b6a3-46b7-a369-8e0f7411aef8", 00:16:47.420 "is_configured": true, 00:16:47.420 "data_offset": 2048, 00:16:47.420 "data_size": 63488 00:16:47.420 }, 00:16:47.420 { 00:16:47.420 "name": "BaseBdev4", 00:16:47.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.420 "is_configured": false, 00:16:47.420 "data_offset": 0, 00:16:47.420 "data_size": 0 00:16:47.420 } 00:16:47.420 ] 00:16:47.420 }' 00:16:47.420 08:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.420 08:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [2024-12-13 08:28:00.160982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.990 [2024-12-13 08:28:00.161292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.990 [2024-12-13 08:28:00.161314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:47.990 [2024-12-13 
08:28:00.161613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:47.990 BaseBdev4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [2024-12-13 08:28:00.168887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.990 [2024-12-13 08:28:00.168955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.990 [2024-12-13 08:28:00.169279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:47.990 08:28:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 [ 00:16:47.990 { 00:16:47.990 "name": "BaseBdev4", 00:16:47.990 "aliases": [ 00:16:47.990 "ff51f745-c69e-4f71-a9fc-7adc737cd705" 00:16:47.990 ], 00:16:47.990 "product_name": "Malloc disk", 00:16:47.990 "block_size": 512, 00:16:47.990 "num_blocks": 65536, 00:16:47.990 "uuid": "ff51f745-c69e-4f71-a9fc-7adc737cd705", 00:16:47.990 "assigned_rate_limits": { 00:16:47.990 "rw_ios_per_sec": 0, 00:16:47.990 "rw_mbytes_per_sec": 0, 00:16:47.990 "r_mbytes_per_sec": 0, 00:16:47.990 "w_mbytes_per_sec": 0 00:16:47.990 }, 00:16:47.990 "claimed": true, 00:16:47.990 "claim_type": "exclusive_write", 00:16:47.990 "zoned": false, 00:16:47.990 "supported_io_types": { 00:16:47.990 "read": true, 00:16:47.990 "write": true, 00:16:47.990 "unmap": true, 00:16:47.990 "flush": true, 00:16:47.990 "reset": true, 00:16:47.990 "nvme_admin": false, 00:16:47.990 "nvme_io": false, 00:16:47.990 "nvme_io_md": false, 00:16:47.990 "write_zeroes": true, 00:16:47.990 "zcopy": true, 00:16:47.990 "get_zone_info": false, 00:16:47.990 "zone_management": false, 00:16:47.990 "zone_append": false, 00:16:47.990 "compare": false, 00:16:47.990 "compare_and_write": false, 00:16:47.990 "abort": true, 00:16:47.990 "seek_hole": false, 00:16:47.990 "seek_data": false, 00:16:47.990 "copy": true, 00:16:47.990 "nvme_iov_md": false 00:16:47.990 }, 00:16:47.990 "memory_domains": [ 00:16:47.990 { 00:16:47.990 "dma_device_id": "system", 00:16:47.990 "dma_device_type": 1 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.990 "dma_device_type": 2 00:16:47.990 } 00:16:47.990 ], 00:16:47.990 "driver_specific": {} 00:16:47.990 } 00:16:47.990 ] 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.990 08:28:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.990 "name": "Existed_Raid", 00:16:47.990 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:47.990 "strip_size_kb": 64, 00:16:47.990 "state": "online", 00:16:47.990 "raid_level": "raid5f", 00:16:47.990 "superblock": true, 00:16:47.990 "num_base_bdevs": 4, 00:16:47.990 "num_base_bdevs_discovered": 4, 00:16:47.990 "num_base_bdevs_operational": 4, 00:16:47.990 "base_bdevs_list": [ 00:16:47.990 { 00:16:47.990 "name": "BaseBdev1", 00:16:47.990 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": "BaseBdev2", 00:16:47.990 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": "BaseBdev3", 00:16:47.990 "uuid": "38dacc3a-b6a3-46b7-a369-8e0f7411aef8", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "name": "BaseBdev4", 00:16:47.990 "uuid": "ff51f745-c69e-4f71-a9fc-7adc737cd705", 00:16:47.990 "is_configured": true, 00:16:47.990 "data_offset": 2048, 00:16:47.990 "data_size": 63488 00:16:47.990 } 00:16:47.990 ] 00:16:47.990 }' 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.990 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 [2024-12-13 08:28:00.689092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.376 "name": "Existed_Raid", 00:16:49.376 "aliases": [ 00:16:49.376 "254aa34c-c3dd-427a-8461-8cd684555581" 00:16:49.376 ], 00:16:49.376 "product_name": "Raid Volume", 00:16:49.376 "block_size": 512, 00:16:49.376 "num_blocks": 190464, 00:16:49.376 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:49.376 "assigned_rate_limits": { 00:16:49.376 "rw_ios_per_sec": 0, 00:16:49.376 "rw_mbytes_per_sec": 0, 00:16:49.376 "r_mbytes_per_sec": 0, 00:16:49.376 "w_mbytes_per_sec": 0 00:16:49.376 }, 00:16:49.376 "claimed": false, 00:16:49.376 "zoned": false, 00:16:49.376 "supported_io_types": { 00:16:49.376 "read": true, 00:16:49.376 "write": true, 00:16:49.376 "unmap": false, 00:16:49.376 "flush": false, 
00:16:49.376 "reset": true, 00:16:49.376 "nvme_admin": false, 00:16:49.376 "nvme_io": false, 00:16:49.376 "nvme_io_md": false, 00:16:49.376 "write_zeroes": true, 00:16:49.376 "zcopy": false, 00:16:49.376 "get_zone_info": false, 00:16:49.376 "zone_management": false, 00:16:49.376 "zone_append": false, 00:16:49.376 "compare": false, 00:16:49.376 "compare_and_write": false, 00:16:49.376 "abort": false, 00:16:49.376 "seek_hole": false, 00:16:49.376 "seek_data": false, 00:16:49.376 "copy": false, 00:16:49.376 "nvme_iov_md": false 00:16:49.376 }, 00:16:49.376 "driver_specific": { 00:16:49.376 "raid": { 00:16:49.376 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:49.376 "strip_size_kb": 64, 00:16:49.376 "state": "online", 00:16:49.376 "raid_level": "raid5f", 00:16:49.376 "superblock": true, 00:16:49.376 "num_base_bdevs": 4, 00:16:49.376 "num_base_bdevs_discovered": 4, 00:16:49.376 "num_base_bdevs_operational": 4, 00:16:49.376 "base_bdevs_list": [ 00:16:49.376 { 00:16:49.376 "name": "BaseBdev1", 00:16:49.376 "uuid": "82f690ce-9a27-4ce6-bf24-7f52536b2358", 00:16:49.376 "is_configured": true, 00:16:49.376 "data_offset": 2048, 00:16:49.376 "data_size": 63488 00:16:49.376 }, 00:16:49.376 { 00:16:49.376 "name": "BaseBdev2", 00:16:49.376 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:49.376 "is_configured": true, 00:16:49.376 "data_offset": 2048, 00:16:49.376 "data_size": 63488 00:16:49.376 }, 00:16:49.376 { 00:16:49.376 "name": "BaseBdev3", 00:16:49.376 "uuid": "38dacc3a-b6a3-46b7-a369-8e0f7411aef8", 00:16:49.376 "is_configured": true, 00:16:49.376 "data_offset": 2048, 00:16:49.376 "data_size": 63488 00:16:49.376 }, 00:16:49.376 { 00:16:49.376 "name": "BaseBdev4", 00:16:49.376 "uuid": "ff51f745-c69e-4f71-a9fc-7adc737cd705", 00:16:49.376 "is_configured": true, 00:16:49.376 "data_offset": 2048, 00:16:49.376 "data_size": 63488 00:16:49.376 } 00:16:49.376 ] 00:16:49.376 } 00:16:49.376 } 00:16:49.376 }' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:49.376 BaseBdev2 00:16:49.376 BaseBdev3 00:16:49.376 BaseBdev4' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.376 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:49.377 08:28:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 [2024-12-13 08:28:00.964412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.377 "name": "Existed_Raid", 00:16:49.377 "uuid": "254aa34c-c3dd-427a-8461-8cd684555581", 00:16:49.377 "strip_size_kb": 64, 00:16:49.377 "state": "online", 00:16:49.377 "raid_level": "raid5f", 00:16:49.377 "superblock": true, 00:16:49.377 "num_base_bdevs": 4, 00:16:49.377 "num_base_bdevs_discovered": 3, 00:16:49.377 "num_base_bdevs_operational": 3, 00:16:49.377 "base_bdevs_list": [ 00:16:49.377 { 00:16:49.377 "name": 
null, 00:16:49.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.377 "is_configured": false, 00:16:49.377 "data_offset": 0, 00:16:49.377 "data_size": 63488 00:16:49.377 }, 00:16:49.377 { 00:16:49.377 "name": "BaseBdev2", 00:16:49.377 "uuid": "9e4c8a08-72d8-4aa6-842c-e1435fa987dd", 00:16:49.377 "is_configured": true, 00:16:49.377 "data_offset": 2048, 00:16:49.377 "data_size": 63488 00:16:49.377 }, 00:16:49.377 { 00:16:49.377 "name": "BaseBdev3", 00:16:49.377 "uuid": "38dacc3a-b6a3-46b7-a369-8e0f7411aef8", 00:16:49.377 "is_configured": true, 00:16:49.377 "data_offset": 2048, 00:16:49.377 "data_size": 63488 00:16:49.377 }, 00:16:49.377 { 00:16:49.377 "name": "BaseBdev4", 00:16:49.377 "uuid": "ff51f745-c69e-4f71-a9fc-7adc737cd705", 00:16:49.377 "is_configured": true, 00:16:49.377 "data_offset": 2048, 00:16:49.377 "data_size": 63488 00:16:49.377 } 00:16:49.377 ] 00:16:49.377 }' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 [2024-12-13 08:28:01.571601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.377 [2024-12-13 08:28:01.571759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.377 [2024-12-13 08:28:01.665413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 [2024-12-13 08:28:01.725335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.636 [2024-12-13 
08:28:01.875509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:49.636 [2024-12-13 08:28:01.875614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.636 08:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.895 08:28:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.895 BaseBdev2 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.895 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.895 [ 00:16:49.895 { 00:16:49.895 "name": "BaseBdev2", 00:16:49.895 "aliases": [ 00:16:49.895 "791f22e7-fb2e-4268-a4e1-e365cac112bd" 00:16:49.895 ], 00:16:49.895 "product_name": "Malloc disk", 00:16:49.895 "block_size": 512, 00:16:49.895 
"num_blocks": 65536, 00:16:49.895 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:49.895 "assigned_rate_limits": { 00:16:49.895 "rw_ios_per_sec": 0, 00:16:49.895 "rw_mbytes_per_sec": 0, 00:16:49.895 "r_mbytes_per_sec": 0, 00:16:49.895 "w_mbytes_per_sec": 0 00:16:49.895 }, 00:16:49.895 "claimed": false, 00:16:49.895 "zoned": false, 00:16:49.895 "supported_io_types": { 00:16:49.895 "read": true, 00:16:49.895 "write": true, 00:16:49.895 "unmap": true, 00:16:49.895 "flush": true, 00:16:49.895 "reset": true, 00:16:49.895 "nvme_admin": false, 00:16:49.895 "nvme_io": false, 00:16:49.895 "nvme_io_md": false, 00:16:49.895 "write_zeroes": true, 00:16:49.895 "zcopy": true, 00:16:49.895 "get_zone_info": false, 00:16:49.895 "zone_management": false, 00:16:49.895 "zone_append": false, 00:16:49.895 "compare": false, 00:16:49.895 "compare_and_write": false, 00:16:49.895 "abort": true, 00:16:49.895 "seek_hole": false, 00:16:49.895 "seek_data": false, 00:16:49.895 "copy": true, 00:16:49.895 "nvme_iov_md": false 00:16:49.895 }, 00:16:49.896 "memory_domains": [ 00:16:49.896 { 00:16:49.896 "dma_device_id": "system", 00:16:49.896 "dma_device_type": 1 00:16:49.896 }, 00:16:49.896 { 00:16:49.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.896 "dma_device_type": 2 00:16:49.896 } 00:16:49.896 ], 00:16:49.896 "driver_specific": {} 00:16:49.896 } 00:16:49.896 ] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:49.896 08:28:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.896 BaseBdev3 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.896 [ 00:16:49.896 { 00:16:49.896 "name": "BaseBdev3", 00:16:49.896 "aliases": [ 00:16:49.896 
"c10b7ec1-deb8-48da-9a36-45708e08d2a4" 00:16:49.896 ], 00:16:49.896 "product_name": "Malloc disk", 00:16:49.896 "block_size": 512, 00:16:49.896 "num_blocks": 65536, 00:16:49.896 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:49.896 "assigned_rate_limits": { 00:16:49.896 "rw_ios_per_sec": 0, 00:16:49.896 "rw_mbytes_per_sec": 0, 00:16:49.896 "r_mbytes_per_sec": 0, 00:16:49.896 "w_mbytes_per_sec": 0 00:16:49.896 }, 00:16:49.896 "claimed": false, 00:16:49.896 "zoned": false, 00:16:49.896 "supported_io_types": { 00:16:49.896 "read": true, 00:16:49.896 "write": true, 00:16:49.896 "unmap": true, 00:16:49.896 "flush": true, 00:16:49.896 "reset": true, 00:16:49.896 "nvme_admin": false, 00:16:49.896 "nvme_io": false, 00:16:49.896 "nvme_io_md": false, 00:16:49.896 "write_zeroes": true, 00:16:49.896 "zcopy": true, 00:16:49.896 "get_zone_info": false, 00:16:49.896 "zone_management": false, 00:16:49.896 "zone_append": false, 00:16:49.896 "compare": false, 00:16:49.896 "compare_and_write": false, 00:16:49.896 "abort": true, 00:16:49.896 "seek_hole": false, 00:16:49.896 "seek_data": false, 00:16:49.896 "copy": true, 00:16:49.896 "nvme_iov_md": false 00:16:49.896 }, 00:16:49.896 "memory_domains": [ 00:16:49.896 { 00:16:49.896 "dma_device_id": "system", 00:16:49.896 "dma_device_type": 1 00:16:49.896 }, 00:16:49.896 { 00:16:49.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.896 "dma_device_type": 2 00:16:49.896 } 00:16:49.896 ], 00:16:49.896 "driver_specific": {} 00:16:49.896 } 00:16:49.896 ] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.896 08:28:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.896 BaseBdev4 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.896 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:49.896 [ 00:16:49.896 { 00:16:49.896 "name": "BaseBdev4", 00:16:49.896 "aliases": [ 00:16:49.896 "b60c522f-aaa7-410a-b6f7-0de41d9284b1" 00:16:49.896 ], 00:16:49.896 "product_name": "Malloc disk", 00:16:49.896 "block_size": 512, 00:16:49.896 "num_blocks": 65536, 00:16:49.896 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:49.896 "assigned_rate_limits": { 00:16:49.896 "rw_ios_per_sec": 0, 00:16:49.896 "rw_mbytes_per_sec": 0, 00:16:49.896 "r_mbytes_per_sec": 0, 00:16:50.155 "w_mbytes_per_sec": 0 00:16:50.155 }, 00:16:50.155 "claimed": false, 00:16:50.155 "zoned": false, 00:16:50.155 "supported_io_types": { 00:16:50.155 "read": true, 00:16:50.155 "write": true, 00:16:50.155 "unmap": true, 00:16:50.155 "flush": true, 00:16:50.155 "reset": true, 00:16:50.155 "nvme_admin": false, 00:16:50.155 "nvme_io": false, 00:16:50.155 "nvme_io_md": false, 00:16:50.155 "write_zeroes": true, 00:16:50.155 "zcopy": true, 00:16:50.155 "get_zone_info": false, 00:16:50.155 "zone_management": false, 00:16:50.155 "zone_append": false, 00:16:50.155 "compare": false, 00:16:50.155 "compare_and_write": false, 00:16:50.155 "abort": true, 00:16:50.155 "seek_hole": false, 00:16:50.155 "seek_data": false, 00:16:50.155 "copy": true, 00:16:50.155 "nvme_iov_md": false 00:16:50.155 }, 00:16:50.155 "memory_domains": [ 00:16:50.155 { 00:16:50.155 "dma_device_id": "system", 00:16:50.155 "dma_device_type": 1 00:16:50.155 }, 00:16:50.155 { 00:16:50.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.156 "dma_device_type": 2 00:16:50.156 } 00:16:50.156 ], 00:16:50.156 "driver_specific": {} 00:16:50.156 } 00:16:50.156 ] 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.156 08:28:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.156 [2024-12-13 08:28:02.274340] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.156 [2024-12-13 08:28:02.274429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.156 [2024-12-13 08:28:02.274473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.156 [2024-12-13 08:28:02.276303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.156 [2024-12-13 08:28:02.276398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.156 "name": "Existed_Raid", 00:16:50.156 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:50.156 "strip_size_kb": 64, 00:16:50.156 "state": "configuring", 00:16:50.156 "raid_level": "raid5f", 00:16:50.156 "superblock": true, 00:16:50.156 "num_base_bdevs": 4, 00:16:50.156 "num_base_bdevs_discovered": 3, 00:16:50.156 "num_base_bdevs_operational": 4, 00:16:50.156 "base_bdevs_list": [ 00:16:50.156 { 00:16:50.156 "name": "BaseBdev1", 00:16:50.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.156 "is_configured": false, 00:16:50.156 "data_offset": 0, 00:16:50.156 "data_size": 0 00:16:50.156 }, 00:16:50.156 { 00:16:50.156 "name": "BaseBdev2", 00:16:50.156 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:50.156 "is_configured": true, 00:16:50.156 "data_offset": 2048, 00:16:50.156 
"data_size": 63488 00:16:50.156 }, 00:16:50.156 { 00:16:50.156 "name": "BaseBdev3", 00:16:50.156 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:50.156 "is_configured": true, 00:16:50.156 "data_offset": 2048, 00:16:50.156 "data_size": 63488 00:16:50.156 }, 00:16:50.156 { 00:16:50.156 "name": "BaseBdev4", 00:16:50.156 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:50.156 "is_configured": true, 00:16:50.156 "data_offset": 2048, 00:16:50.156 "data_size": 63488 00:16:50.156 } 00:16:50.156 ] 00:16:50.156 }' 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.156 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.416 [2024-12-13 08:28:02.733574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.416 08:28:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.416 "name": "Existed_Raid", 00:16:50.416 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:50.416 "strip_size_kb": 64, 00:16:50.416 "state": "configuring", 00:16:50.416 "raid_level": "raid5f", 00:16:50.416 "superblock": true, 00:16:50.416 "num_base_bdevs": 4, 00:16:50.416 "num_base_bdevs_discovered": 2, 00:16:50.416 "num_base_bdevs_operational": 4, 00:16:50.416 "base_bdevs_list": [ 00:16:50.416 { 00:16:50.416 "name": "BaseBdev1", 00:16:50.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.416 "is_configured": false, 00:16:50.416 "data_offset": 0, 00:16:50.416 "data_size": 0 00:16:50.416 }, 00:16:50.416 { 00:16:50.416 "name": null, 00:16:50.416 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:50.416 
"is_configured": false, 00:16:50.416 "data_offset": 0, 00:16:50.416 "data_size": 63488 00:16:50.416 }, 00:16:50.416 { 00:16:50.416 "name": "BaseBdev3", 00:16:50.416 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:50.416 "is_configured": true, 00:16:50.416 "data_offset": 2048, 00:16:50.416 "data_size": 63488 00:16:50.416 }, 00:16:50.416 { 00:16:50.416 "name": "BaseBdev4", 00:16:50.416 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:50.416 "is_configured": true, 00:16:50.416 "data_offset": 2048, 00:16:50.416 "data_size": 63488 00:16:50.416 } 00:16:50.416 ] 00:16:50.416 }' 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.416 08:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 [2024-12-13 08:28:03.233318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:50.985 BaseBdev1 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 [ 00:16:50.985 { 00:16:50.985 "name": "BaseBdev1", 00:16:50.985 "aliases": [ 00:16:50.985 "e873e46a-3542-4c52-9876-4691d485d83c" 00:16:50.985 ], 00:16:50.985 "product_name": "Malloc disk", 00:16:50.985 "block_size": 512, 00:16:50.985 "num_blocks": 65536, 00:16:50.985 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 
00:16:50.985 "assigned_rate_limits": { 00:16:50.985 "rw_ios_per_sec": 0, 00:16:50.985 "rw_mbytes_per_sec": 0, 00:16:50.985 "r_mbytes_per_sec": 0, 00:16:50.985 "w_mbytes_per_sec": 0 00:16:50.985 }, 00:16:50.985 "claimed": true, 00:16:50.985 "claim_type": "exclusive_write", 00:16:50.985 "zoned": false, 00:16:50.985 "supported_io_types": { 00:16:50.985 "read": true, 00:16:50.985 "write": true, 00:16:50.985 "unmap": true, 00:16:50.985 "flush": true, 00:16:50.985 "reset": true, 00:16:50.985 "nvme_admin": false, 00:16:50.985 "nvme_io": false, 00:16:50.985 "nvme_io_md": false, 00:16:50.985 "write_zeroes": true, 00:16:50.985 "zcopy": true, 00:16:50.985 "get_zone_info": false, 00:16:50.985 "zone_management": false, 00:16:50.985 "zone_append": false, 00:16:50.985 "compare": false, 00:16:50.985 "compare_and_write": false, 00:16:50.985 "abort": true, 00:16:50.985 "seek_hole": false, 00:16:50.985 "seek_data": false, 00:16:50.985 "copy": true, 00:16:50.985 "nvme_iov_md": false 00:16:50.985 }, 00:16:50.985 "memory_domains": [ 00:16:50.985 { 00:16:50.985 "dma_device_id": "system", 00:16:50.985 "dma_device_type": 1 00:16:50.985 }, 00:16:50.985 { 00:16:50.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.985 "dma_device_type": 2 00:16:50.985 } 00:16:50.985 ], 00:16:50.985 "driver_specific": {} 00:16:50.985 } 00:16:50.985 ] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.985 08:28:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.985 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.985 "name": "Existed_Raid", 00:16:50.985 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:50.985 "strip_size_kb": 64, 00:16:50.985 "state": "configuring", 00:16:50.985 "raid_level": "raid5f", 00:16:50.985 "superblock": true, 00:16:50.985 "num_base_bdevs": 4, 00:16:50.985 "num_base_bdevs_discovered": 3, 00:16:50.985 "num_base_bdevs_operational": 4, 00:16:50.985 "base_bdevs_list": [ 00:16:50.985 { 00:16:50.985 "name": "BaseBdev1", 00:16:50.985 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 
00:16:50.985 "is_configured": true, 00:16:50.985 "data_offset": 2048, 00:16:50.985 "data_size": 63488 00:16:50.985 }, 00:16:50.985 { 00:16:50.985 "name": null, 00:16:50.985 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:50.985 "is_configured": false, 00:16:50.986 "data_offset": 0, 00:16:50.986 "data_size": 63488 00:16:50.986 }, 00:16:50.986 { 00:16:50.986 "name": "BaseBdev3", 00:16:50.986 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:50.986 "is_configured": true, 00:16:50.986 "data_offset": 2048, 00:16:50.986 "data_size": 63488 00:16:50.986 }, 00:16:50.986 { 00:16:50.986 "name": "BaseBdev4", 00:16:50.986 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:50.986 "is_configured": true, 00:16:50.986 "data_offset": 2048, 00:16:50.986 "data_size": 63488 00:16:50.986 } 00:16:50.986 ] 00:16:50.986 }' 00:16:50.986 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.986 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.556 [2024-12-13 08:28:03.756514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.556 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.557 "name": "Existed_Raid", 00:16:51.557 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:51.557 "strip_size_kb": 64, 00:16:51.557 "state": "configuring", 00:16:51.557 "raid_level": "raid5f", 00:16:51.557 "superblock": true, 00:16:51.557 "num_base_bdevs": 4, 00:16:51.557 "num_base_bdevs_discovered": 2, 00:16:51.557 "num_base_bdevs_operational": 4, 00:16:51.557 "base_bdevs_list": [ 00:16:51.557 { 00:16:51.557 "name": "BaseBdev1", 00:16:51.557 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:51.557 "is_configured": true, 00:16:51.557 "data_offset": 2048, 00:16:51.557 "data_size": 63488 00:16:51.557 }, 00:16:51.557 { 00:16:51.557 "name": null, 00:16:51.557 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:51.557 "is_configured": false, 00:16:51.557 "data_offset": 0, 00:16:51.557 "data_size": 63488 00:16:51.557 }, 00:16:51.557 { 00:16:51.557 "name": null, 00:16:51.557 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:51.557 "is_configured": false, 00:16:51.557 "data_offset": 0, 00:16:51.557 "data_size": 63488 00:16:51.557 }, 00:16:51.557 { 00:16:51.557 "name": "BaseBdev4", 00:16:51.557 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:51.557 "is_configured": true, 00:16:51.557 "data_offset": 2048, 00:16:51.557 "data_size": 63488 00:16:51.557 } 00:16:51.557 ] 00:16:51.557 }' 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.557 08:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:52.125 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.126 [2024-12-13 08:28:04.283602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.126 08:28:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.126 "name": "Existed_Raid", 00:16:52.126 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:52.126 "strip_size_kb": 64, 00:16:52.126 "state": "configuring", 00:16:52.126 "raid_level": "raid5f", 00:16:52.126 "superblock": true, 00:16:52.126 "num_base_bdevs": 4, 00:16:52.126 "num_base_bdevs_discovered": 3, 00:16:52.126 "num_base_bdevs_operational": 4, 00:16:52.126 "base_bdevs_list": [ 00:16:52.126 { 00:16:52.126 "name": "BaseBdev1", 00:16:52.126 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:52.126 "is_configured": true, 00:16:52.126 "data_offset": 2048, 00:16:52.126 "data_size": 63488 00:16:52.126 }, 00:16:52.126 { 00:16:52.126 "name": null, 00:16:52.126 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:52.126 "is_configured": false, 00:16:52.126 "data_offset": 0, 00:16:52.126 "data_size": 63488 00:16:52.126 }, 00:16:52.126 { 00:16:52.126 "name": "BaseBdev3", 00:16:52.126 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:52.126 
"is_configured": true, 00:16:52.126 "data_offset": 2048, 00:16:52.126 "data_size": 63488 00:16:52.126 }, 00:16:52.126 { 00:16:52.126 "name": "BaseBdev4", 00:16:52.126 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:52.126 "is_configured": true, 00:16:52.126 "data_offset": 2048, 00:16:52.126 "data_size": 63488 00:16:52.126 } 00:16:52.126 ] 00:16:52.126 }' 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.126 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.385 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.385 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.385 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.385 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.385 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.643 [2024-12-13 08:28:04.770829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.643 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.643 "name": "Existed_Raid", 00:16:52.643 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:52.643 "strip_size_kb": 64, 00:16:52.643 "state": "configuring", 00:16:52.643 "raid_level": "raid5f", 00:16:52.643 
"superblock": true, 00:16:52.643 "num_base_bdevs": 4, 00:16:52.643 "num_base_bdevs_discovered": 2, 00:16:52.643 "num_base_bdevs_operational": 4, 00:16:52.643 "base_bdevs_list": [ 00:16:52.643 { 00:16:52.643 "name": null, 00:16:52.643 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:52.643 "is_configured": false, 00:16:52.643 "data_offset": 0, 00:16:52.643 "data_size": 63488 00:16:52.643 }, 00:16:52.643 { 00:16:52.643 "name": null, 00:16:52.643 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:52.643 "is_configured": false, 00:16:52.643 "data_offset": 0, 00:16:52.643 "data_size": 63488 00:16:52.643 }, 00:16:52.643 { 00:16:52.643 "name": "BaseBdev3", 00:16:52.643 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:52.643 "is_configured": true, 00:16:52.643 "data_offset": 2048, 00:16:52.643 "data_size": 63488 00:16:52.643 }, 00:16:52.643 { 00:16:52.644 "name": "BaseBdev4", 00:16:52.644 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:52.644 "is_configured": true, 00:16:52.644 "data_offset": 2048, 00:16:52.644 "data_size": 63488 00:16:52.644 } 00:16:52.644 ] 00:16:52.644 }' 00:16:52.644 08:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.644 08:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 [2024-12-13 08:28:05.404701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.210 08:28:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.210 "name": "Existed_Raid", 00:16:53.210 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:53.210 "strip_size_kb": 64, 00:16:53.210 "state": "configuring", 00:16:53.210 "raid_level": "raid5f", 00:16:53.210 "superblock": true, 00:16:53.210 "num_base_bdevs": 4, 00:16:53.210 "num_base_bdevs_discovered": 3, 00:16:53.210 "num_base_bdevs_operational": 4, 00:16:53.210 "base_bdevs_list": [ 00:16:53.210 { 00:16:53.210 "name": null, 00:16:53.210 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:53.210 "is_configured": false, 00:16:53.210 "data_offset": 0, 00:16:53.210 "data_size": 63488 00:16:53.210 }, 00:16:53.210 { 00:16:53.210 "name": "BaseBdev2", 00:16:53.210 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:53.210 "is_configured": true, 00:16:53.210 "data_offset": 2048, 00:16:53.210 "data_size": 63488 00:16:53.210 }, 00:16:53.210 { 00:16:53.210 "name": "BaseBdev3", 00:16:53.210 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:53.210 "is_configured": true, 00:16:53.210 "data_offset": 2048, 00:16:53.210 "data_size": 63488 00:16:53.210 }, 00:16:53.210 { 00:16:53.210 "name": "BaseBdev4", 00:16:53.210 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:53.210 "is_configured": true, 00:16:53.210 "data_offset": 2048, 00:16:53.210 "data_size": 63488 00:16:53.210 } 00:16:53.210 ] 00:16:53.210 }' 00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:53.210 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e873e46a-3542-4c52-9876-4691d485d83c 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 [2024-12-13 08:28:05.953911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:53.806 [2024-12-13 08:28:05.954266] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:53.806 [2024-12-13 08:28:05.954285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:53.806 [2024-12-13 08:28:05.954547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:53.806 NewBaseBdev 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 [2024-12-13 08:28:05.961577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:53.806 [2024-12-13 08:28:05.961642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:53.806 [2024-12-13 08:28:05.961903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 [ 00:16:53.806 { 00:16:53.806 "name": "NewBaseBdev", 00:16:53.806 "aliases": [ 00:16:53.806 "e873e46a-3542-4c52-9876-4691d485d83c" 00:16:53.806 ], 00:16:53.806 "product_name": "Malloc disk", 00:16:53.806 "block_size": 512, 00:16:53.806 "num_blocks": 65536, 00:16:53.806 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:53.806 "assigned_rate_limits": { 00:16:53.806 "rw_ios_per_sec": 0, 00:16:53.806 "rw_mbytes_per_sec": 0, 00:16:53.806 "r_mbytes_per_sec": 0, 00:16:53.806 "w_mbytes_per_sec": 0 00:16:53.806 }, 00:16:53.806 "claimed": true, 00:16:53.806 "claim_type": "exclusive_write", 00:16:53.806 "zoned": false, 00:16:53.806 "supported_io_types": { 00:16:53.806 "read": true, 00:16:53.806 "write": true, 00:16:53.806 "unmap": true, 00:16:53.806 "flush": true, 00:16:53.806 "reset": true, 00:16:53.806 "nvme_admin": false, 00:16:53.806 "nvme_io": false, 00:16:53.806 "nvme_io_md": false, 00:16:53.806 "write_zeroes": true, 00:16:53.806 "zcopy": true, 00:16:53.806 "get_zone_info": false, 00:16:53.806 "zone_management": false, 00:16:53.806 "zone_append": false, 00:16:53.806 "compare": false, 00:16:53.806 "compare_and_write": false, 00:16:53.807 "abort": true, 00:16:53.807 "seek_hole": false, 00:16:53.807 "seek_data": false, 00:16:53.807 "copy": true, 00:16:53.807 "nvme_iov_md": false 00:16:53.807 }, 00:16:53.807 "memory_domains": [ 00:16:53.807 { 00:16:53.807 "dma_device_id": "system", 00:16:53.807 "dma_device_type": 1 00:16:53.807 }, 00:16:53.807 { 00:16:53.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.807 "dma_device_type": 2 00:16:53.807 } 
00:16:53.807 ], 00:16:53.807 "driver_specific": {} 00:16:53.807 } 00:16:53.807 ] 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.807 08:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.807 
08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.807 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.807 "name": "Existed_Raid", 00:16:53.807 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:53.807 "strip_size_kb": 64, 00:16:53.807 "state": "online", 00:16:53.807 "raid_level": "raid5f", 00:16:53.807 "superblock": true, 00:16:53.807 "num_base_bdevs": 4, 00:16:53.807 "num_base_bdevs_discovered": 4, 00:16:53.807 "num_base_bdevs_operational": 4, 00:16:53.807 "base_bdevs_list": [ 00:16:53.807 { 00:16:53.807 "name": "NewBaseBdev", 00:16:53.807 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:53.807 "is_configured": true, 00:16:53.807 "data_offset": 2048, 00:16:53.807 "data_size": 63488 00:16:53.807 }, 00:16:53.807 { 00:16:53.807 "name": "BaseBdev2", 00:16:53.807 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:53.807 "is_configured": true, 00:16:53.807 "data_offset": 2048, 00:16:53.807 "data_size": 63488 00:16:53.807 }, 00:16:53.807 { 00:16:53.807 "name": "BaseBdev3", 00:16:53.807 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:53.807 "is_configured": true, 00:16:53.807 "data_offset": 2048, 00:16:53.807 "data_size": 63488 00:16:53.807 }, 00:16:53.807 { 00:16:53.807 "name": "BaseBdev4", 00:16:53.807 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:53.807 "is_configured": true, 00:16:53.807 "data_offset": 2048, 00:16:53.807 "data_size": 63488 00:16:53.807 } 00:16:53.807 ] 00:16:53.807 }' 00:16:53.807 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.807 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.079 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.080 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.080 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.080 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.080 [2024-12-13 08:28:06.437482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.351 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.351 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.351 "name": "Existed_Raid", 00:16:54.351 "aliases": [ 00:16:54.351 "924aef2b-8f42-4247-8bfa-09cb87b720bf" 00:16:54.351 ], 00:16:54.351 "product_name": "Raid Volume", 00:16:54.351 "block_size": 512, 00:16:54.351 "num_blocks": 190464, 00:16:54.351 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:54.351 "assigned_rate_limits": { 00:16:54.351 "rw_ios_per_sec": 0, 00:16:54.351 "rw_mbytes_per_sec": 0, 00:16:54.351 "r_mbytes_per_sec": 0, 00:16:54.351 "w_mbytes_per_sec": 0 00:16:54.351 }, 00:16:54.351 "claimed": false, 00:16:54.351 "zoned": false, 00:16:54.351 "supported_io_types": { 00:16:54.351 "read": true, 00:16:54.351 "write": true, 00:16:54.351 "unmap": false, 00:16:54.351 "flush": false, 
00:16:54.351 "reset": true, 00:16:54.351 "nvme_admin": false, 00:16:54.351 "nvme_io": false, 00:16:54.351 "nvme_io_md": false, 00:16:54.352 "write_zeroes": true, 00:16:54.352 "zcopy": false, 00:16:54.352 "get_zone_info": false, 00:16:54.352 "zone_management": false, 00:16:54.352 "zone_append": false, 00:16:54.352 "compare": false, 00:16:54.352 "compare_and_write": false, 00:16:54.352 "abort": false, 00:16:54.352 "seek_hole": false, 00:16:54.352 "seek_data": false, 00:16:54.352 "copy": false, 00:16:54.352 "nvme_iov_md": false 00:16:54.352 }, 00:16:54.352 "driver_specific": { 00:16:54.352 "raid": { 00:16:54.352 "uuid": "924aef2b-8f42-4247-8bfa-09cb87b720bf", 00:16:54.352 "strip_size_kb": 64, 00:16:54.352 "state": "online", 00:16:54.352 "raid_level": "raid5f", 00:16:54.352 "superblock": true, 00:16:54.352 "num_base_bdevs": 4, 00:16:54.352 "num_base_bdevs_discovered": 4, 00:16:54.352 "num_base_bdevs_operational": 4, 00:16:54.352 "base_bdevs_list": [ 00:16:54.352 { 00:16:54.352 "name": "NewBaseBdev", 00:16:54.352 "uuid": "e873e46a-3542-4c52-9876-4691d485d83c", 00:16:54.352 "is_configured": true, 00:16:54.352 "data_offset": 2048, 00:16:54.352 "data_size": 63488 00:16:54.352 }, 00:16:54.352 { 00:16:54.352 "name": "BaseBdev2", 00:16:54.352 "uuid": "791f22e7-fb2e-4268-a4e1-e365cac112bd", 00:16:54.352 "is_configured": true, 00:16:54.352 "data_offset": 2048, 00:16:54.352 "data_size": 63488 00:16:54.352 }, 00:16:54.352 { 00:16:54.352 "name": "BaseBdev3", 00:16:54.352 "uuid": "c10b7ec1-deb8-48da-9a36-45708e08d2a4", 00:16:54.352 "is_configured": true, 00:16:54.352 "data_offset": 2048, 00:16:54.352 "data_size": 63488 00:16:54.352 }, 00:16:54.352 { 00:16:54.352 "name": "BaseBdev4", 00:16:54.352 "uuid": "b60c522f-aaa7-410a-b6f7-0de41d9284b1", 00:16:54.352 "is_configured": true, 00:16:54.352 "data_offset": 2048, 00:16:54.352 "data_size": 63488 00:16:54.352 } 00:16:54.352 ] 00:16:54.352 } 00:16:54.352 } 00:16:54.352 }' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:54.352 BaseBdev2 00:16:54.352 BaseBdev3 00:16:54.352 BaseBdev4' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.352 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.611 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.611 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.611 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.611 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.612 [2024-12-13 08:28:06.752694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.612 [2024-12-13 08:28:06.752725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.612 [2024-12-13 08:28:06.752799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.612 [2024-12-13 08:28:06.753082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.612 [2024-12-13 08:28:06.753094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83626 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83626 ']' 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83626 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83626 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83626' 00:16:54.612 killing process with pid 83626 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83626 00:16:54.612 [2024-12-13 08:28:06.802157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.612 08:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83626 00:16:54.871 [2024-12-13 08:28:07.195547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.253 08:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.253 00:16:56.253 real 0m11.451s 00:16:56.253 user 0m18.186s 00:16:56.253 sys 0m2.100s 00:16:56.253 08:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.253 08:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.253 ************************************ 00:16:56.253 END TEST raid5f_state_function_test_sb 00:16:56.253 ************************************ 00:16:56.253 08:28:08 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:56.253 08:28:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:16:56.253 08:28:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.253 08:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.253 ************************************ 00:16:56.253 START TEST raid5f_superblock_test 00:16:56.253 ************************************ 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:56.253 08:28:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84298 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84298 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84298 ']' 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.253 08:28:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.253 [2024-12-13 08:28:08.448854] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:16:56.253 [2024-12-13 08:28:08.448979] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84298 ] 00:16:56.513 [2024-12-13 08:28:08.620690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.513 [2024-12-13 08:28:08.732143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.772 [2024-12-13 08:28:08.931361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.772 [2024-12-13 08:28:08.931423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 malloc1 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 [2024-12-13 08:28:09.333254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.032 [2024-12-13 08:28:09.333357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.032 [2024-12-13 08:28:09.333396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.032 [2024-12-13 08:28:09.333429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.032 [2024-12-13 08:28:09.335584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.032 [2024-12-13 08:28:09.335659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.032 pt1 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 malloc2 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 [2024-12-13 08:28:09.394509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.032 [2024-12-13 08:28:09.394606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.032 [2024-12-13 08:28:09.394645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.032 [2024-12-13 08:28:09.394678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.292 [2024-12-13 08:28:09.396808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.292 [2024-12-13 08:28:09.396882] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.292 pt2 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:57.292 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 malloc3 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 [2024-12-13 08:28:09.462361] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.293 [2024-12-13 08:28:09.462454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.293 [2024-12-13 08:28:09.462493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.293 [2024-12-13 08:28:09.462525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.293 [2024-12-13 08:28:09.464628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.293 [2024-12-13 08:28:09.464701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.293 pt3 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 malloc4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 [2024-12-13 08:28:09.521017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:57.293 [2024-12-13 08:28:09.521135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.293 [2024-12-13 08:28:09.521177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.293 [2024-12-13 08:28:09.521212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.293 [2024-12-13 08:28:09.523310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.293 [2024-12-13 08:28:09.523375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:57.293 pt4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.293 [2024-12-13 08:28:09.533028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.293 [2024-12-13 08:28:09.534784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.293 [2024-12-13 08:28:09.534931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.293 [2024-12-13 08:28:09.534984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:57.293 [2024-12-13 08:28:09.535199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.293 [2024-12-13 08:28:09.535216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.293 [2024-12-13 08:28:09.535467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:57.293 [2024-12-13 08:28:09.542658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.293 [2024-12-13 08:28:09.542717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.293 [2024-12-13 08:28:09.542912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.293 
08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.293 "name": "raid_bdev1", 00:16:57.293 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:57.293 "strip_size_kb": 64, 00:16:57.293 "state": "online", 00:16:57.293 "raid_level": "raid5f", 00:16:57.293 "superblock": true, 00:16:57.293 "num_base_bdevs": 4, 00:16:57.293 "num_base_bdevs_discovered": 4, 00:16:57.293 "num_base_bdevs_operational": 4, 00:16:57.293 "base_bdevs_list": [ 00:16:57.293 { 00:16:57.293 "name": "pt1", 00:16:57.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.293 "is_configured": true, 00:16:57.293 "data_offset": 2048, 00:16:57.293 "data_size": 63488 00:16:57.293 }, 00:16:57.293 { 00:16:57.293 "name": "pt2", 00:16:57.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.293 "is_configured": true, 00:16:57.293 "data_offset": 2048, 00:16:57.293 
"data_size": 63488 00:16:57.293 }, 00:16:57.293 { 00:16:57.293 "name": "pt3", 00:16:57.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.293 "is_configured": true, 00:16:57.293 "data_offset": 2048, 00:16:57.293 "data_size": 63488 00:16:57.293 }, 00:16:57.293 { 00:16:57.293 "name": "pt4", 00:16:57.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.293 "is_configured": true, 00:16:57.293 "data_offset": 2048, 00:16:57.293 "data_size": 63488 00:16:57.293 } 00:16:57.293 ] 00:16:57.293 }' 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.293 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.863 08:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.863 [2024-12-13 08:28:10.002995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.863 "name": "raid_bdev1", 00:16:57.863 "aliases": [ 00:16:57.863 "1f6e29ed-f934-46c7-a6c2-22025ad83d8a" 00:16:57.863 ], 00:16:57.863 "product_name": "Raid Volume", 00:16:57.863 "block_size": 512, 00:16:57.863 "num_blocks": 190464, 00:16:57.863 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:57.863 "assigned_rate_limits": { 00:16:57.863 "rw_ios_per_sec": 0, 00:16:57.863 "rw_mbytes_per_sec": 0, 00:16:57.863 "r_mbytes_per_sec": 0, 00:16:57.863 "w_mbytes_per_sec": 0 00:16:57.863 }, 00:16:57.863 "claimed": false, 00:16:57.863 "zoned": false, 00:16:57.863 "supported_io_types": { 00:16:57.863 "read": true, 00:16:57.863 "write": true, 00:16:57.863 "unmap": false, 00:16:57.863 "flush": false, 00:16:57.863 "reset": true, 00:16:57.863 "nvme_admin": false, 00:16:57.863 "nvme_io": false, 00:16:57.863 "nvme_io_md": false, 00:16:57.863 "write_zeroes": true, 00:16:57.863 "zcopy": false, 00:16:57.863 "get_zone_info": false, 00:16:57.863 "zone_management": false, 00:16:57.863 "zone_append": false, 00:16:57.863 "compare": false, 00:16:57.863 "compare_and_write": false, 00:16:57.863 "abort": false, 00:16:57.863 "seek_hole": false, 00:16:57.863 "seek_data": false, 00:16:57.863 "copy": false, 00:16:57.863 "nvme_iov_md": false 00:16:57.863 }, 00:16:57.863 "driver_specific": { 00:16:57.863 "raid": { 00:16:57.863 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:57.863 "strip_size_kb": 64, 00:16:57.863 "state": "online", 00:16:57.863 "raid_level": "raid5f", 00:16:57.863 "superblock": true, 00:16:57.863 "num_base_bdevs": 4, 00:16:57.863 "num_base_bdevs_discovered": 4, 00:16:57.863 "num_base_bdevs_operational": 4, 00:16:57.863 "base_bdevs_list": [ 00:16:57.863 { 00:16:57.863 "name": "pt1", 00:16:57.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.863 "is_configured": true, 00:16:57.863 "data_offset": 2048, 
00:16:57.863 "data_size": 63488 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "name": "pt2", 00:16:57.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.863 "is_configured": true, 00:16:57.863 "data_offset": 2048, 00:16:57.863 "data_size": 63488 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "name": "pt3", 00:16:57.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.863 "is_configured": true, 00:16:57.863 "data_offset": 2048, 00:16:57.863 "data_size": 63488 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "name": "pt4", 00:16:57.863 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.863 "is_configured": true, 00:16:57.863 "data_offset": 2048, 00:16:57.863 "data_size": 63488 00:16:57.863 } 00:16:57.863 ] 00:16:57.863 } 00:16:57.863 } 00:16:57.863 }' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:57.863 pt2 00:16:57.863 pt3 00:16:57.863 pt4' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.863 08:28:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.863 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.864 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 [2024-12-13 08:28:10.334376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1f6e29ed-f934-46c7-a6c2-22025ad83d8a 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
1f6e29ed-f934-46c7-a6c2-22025ad83d8a ']' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 [2024-12-13 08:28:10.382117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.123 [2024-12-13 08:28:10.382188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.123 [2024-12-13 08:28:10.382277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.123 [2024-12-13 08:28:10.382363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.123 [2024-12-13 08:28:10.382377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.123 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.124 
08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.124 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.124 08:28:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:58.383 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 [2024-12-13 08:28:10.545897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.384 [2024-12-13 08:28:10.547777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.384 [2024-12-13 08:28:10.547872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:58.384 [2024-12-13 08:28:10.547937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:58.384 [2024-12-13 08:28:10.548017] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:58.384 [2024-12-13 08:28:10.548118] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.384 [2024-12-13 08:28:10.548181] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:58.384 [2024-12-13 08:28:10.548240] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:58.384 [2024-12-13 08:28:10.548288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.384 [2024-12-13 08:28:10.548320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:58.384 request: 00:16:58.384 { 00:16:58.384 "name": "raid_bdev1", 00:16:58.384 "raid_level": "raid5f", 00:16:58.384 "base_bdevs": [ 00:16:58.384 "malloc1", 00:16:58.384 "malloc2", 00:16:58.384 "malloc3", 00:16:58.384 "malloc4" 00:16:58.384 ], 00:16:58.384 "strip_size_kb": 64, 00:16:58.384 "superblock": false, 00:16:58.384 "method": "bdev_raid_create", 00:16:58.384 "req_id": 1 00:16:58.384 } 00:16:58.384 Got JSON-RPC error response 
00:16:58.384 response: 00:16:58.384 { 00:16:58.384 "code": -17, 00:16:58.384 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.384 } 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.384 [2024-12-13 08:28:10.609685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.384 [2024-12-13 08:28:10.609737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:58.384 [2024-12-13 08:28:10.609769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:58.384 [2024-12-13 08:28:10.609779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.384 [2024-12-13 08:28:10.611975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.384 [2024-12-13 08:28:10.612018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.384 [2024-12-13 08:28:10.612090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.384 [2024-12-13 08:28:10.612151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.384 pt1 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.384 "name": "raid_bdev1", 00:16:58.384 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:58.384 "strip_size_kb": 64, 00:16:58.384 "state": "configuring", 00:16:58.384 "raid_level": "raid5f", 00:16:58.384 "superblock": true, 00:16:58.384 "num_base_bdevs": 4, 00:16:58.384 "num_base_bdevs_discovered": 1, 00:16:58.384 "num_base_bdevs_operational": 4, 00:16:58.384 "base_bdevs_list": [ 00:16:58.384 { 00:16:58.384 "name": "pt1", 00:16:58.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.384 "is_configured": true, 00:16:58.384 "data_offset": 2048, 00:16:58.384 "data_size": 63488 00:16:58.384 }, 00:16:58.384 { 00:16:58.384 "name": null, 00:16:58.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.384 "is_configured": false, 00:16:58.384 "data_offset": 2048, 00:16:58.384 "data_size": 63488 00:16:58.384 }, 00:16:58.384 { 00:16:58.384 "name": null, 00:16:58.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.384 "is_configured": false, 00:16:58.384 "data_offset": 2048, 00:16:58.384 "data_size": 63488 00:16:58.384 }, 00:16:58.384 { 00:16:58.384 "name": null, 00:16:58.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.384 "is_configured": false, 00:16:58.384 "data_offset": 2048, 00:16:58.384 "data_size": 63488 00:16:58.384 } 00:16:58.384 ] 00:16:58.384 }' 
00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.384 08:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.953 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:58.953 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.953 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 [2024-12-13 08:28:11.045007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.954 [2024-12-13 08:28:11.045098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.954 [2024-12-13 08:28:11.045188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:58.954 [2024-12-13 08:28:11.045218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.954 [2024-12-13 08:28:11.045694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.954 [2024-12-13 08:28:11.045757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.954 [2024-12-13 08:28:11.045844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.954 [2024-12-13 08:28:11.045870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.954 pt2 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 [2024-12-13 08:28:11.056980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.954 "name": "raid_bdev1", 00:16:58.954 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:58.954 "strip_size_kb": 64, 00:16:58.954 "state": "configuring", 00:16:58.954 "raid_level": "raid5f", 00:16:58.954 "superblock": true, 00:16:58.954 "num_base_bdevs": 4, 00:16:58.954 "num_base_bdevs_discovered": 1, 00:16:58.954 "num_base_bdevs_operational": 4, 00:16:58.954 "base_bdevs_list": [ 00:16:58.954 { 00:16:58.954 "name": "pt1", 00:16:58.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.954 "is_configured": true, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 }, 00:16:58.954 { 00:16:58.954 "name": null, 00:16:58.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.954 "is_configured": false, 00:16:58.954 "data_offset": 0, 00:16:58.954 "data_size": 63488 00:16:58.954 }, 00:16:58.954 { 00:16:58.954 "name": null, 00:16:58.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.954 "is_configured": false, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 }, 00:16:58.954 { 00:16:58.954 "name": null, 00:16:58.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.954 "is_configured": false, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 } 00:16:58.954 ] 00:16:58.954 }' 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.954 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.213 [2024-12-13 08:28:11.524210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.213 [2024-12-13 08:28:11.524324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.213 [2024-12-13 08:28:11.524362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:59.213 [2024-12-13 08:28:11.524428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.213 [2024-12-13 08:28:11.524904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.213 [2024-12-13 08:28:11.524966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.213 [2024-12-13 08:28:11.525086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.213 [2024-12-13 08:28:11.525154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.213 pt2 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.213 [2024-12-13 08:28:11.536157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:59.213 [2024-12-13 08:28:11.536241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.213 [2024-12-13 08:28:11.536290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:59.213 [2024-12-13 08:28:11.536321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.213 [2024-12-13 08:28:11.536699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.213 [2024-12-13 08:28:11.536755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.213 [2024-12-13 08:28:11.536842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:59.213 [2024-12-13 08:28:11.536896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.213 pt3 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.213 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.213 [2024-12-13 08:28:11.548128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:59.213 [2024-12-13 08:28:11.548166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.213 [2024-12-13 08:28:11.548182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:59.213 [2024-12-13 08:28:11.548191] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.213 [2024-12-13 08:28:11.548554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.213 [2024-12-13 08:28:11.548571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:59.213 [2024-12-13 08:28:11.548629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:59.213 [2024-12-13 08:28:11.548650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:59.214 [2024-12-13 08:28:11.548783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:59.214 [2024-12-13 08:28:11.548791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.214 [2024-12-13 08:28:11.549025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:59.214 [2024-12-13 08:28:11.556587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:59.214 [2024-12-13 08:28:11.556611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:59.214 [2024-12-13 08:28:11.556787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.214 pt4 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.214 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.473 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.473 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.473 "name": "raid_bdev1", 00:16:59.473 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:59.473 "strip_size_kb": 64, 00:16:59.473 "state": "online", 00:16:59.473 "raid_level": "raid5f", 00:16:59.473 "superblock": true, 00:16:59.473 "num_base_bdevs": 4, 00:16:59.473 "num_base_bdevs_discovered": 4, 00:16:59.473 "num_base_bdevs_operational": 4, 00:16:59.473 "base_bdevs_list": [ 00:16:59.473 { 00:16:59.473 "name": "pt1", 00:16:59.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.473 "is_configured": true, 00:16:59.473 
"data_offset": 2048, 00:16:59.473 "data_size": 63488 00:16:59.473 }, 00:16:59.473 { 00:16:59.473 "name": "pt2", 00:16:59.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.473 "is_configured": true, 00:16:59.473 "data_offset": 2048, 00:16:59.473 "data_size": 63488 00:16:59.473 }, 00:16:59.473 { 00:16:59.473 "name": "pt3", 00:16:59.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.473 "is_configured": true, 00:16:59.473 "data_offset": 2048, 00:16:59.473 "data_size": 63488 00:16:59.473 }, 00:16:59.473 { 00:16:59.473 "name": "pt4", 00:16:59.473 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.473 "is_configured": true, 00:16:59.473 "data_offset": 2048, 00:16:59.473 "data_size": 63488 00:16:59.473 } 00:16:59.473 ] 00:16:59.473 }' 00:16:59.473 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.473 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.733 08:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.733 08:28:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.733 [2024-12-13 08:28:12.000797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.733 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.733 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.733 "name": "raid_bdev1", 00:16:59.733 "aliases": [ 00:16:59.733 "1f6e29ed-f934-46c7-a6c2-22025ad83d8a" 00:16:59.733 ], 00:16:59.733 "product_name": "Raid Volume", 00:16:59.733 "block_size": 512, 00:16:59.733 "num_blocks": 190464, 00:16:59.733 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:59.733 "assigned_rate_limits": { 00:16:59.733 "rw_ios_per_sec": 0, 00:16:59.733 "rw_mbytes_per_sec": 0, 00:16:59.733 "r_mbytes_per_sec": 0, 00:16:59.733 "w_mbytes_per_sec": 0 00:16:59.733 }, 00:16:59.733 "claimed": false, 00:16:59.733 "zoned": false, 00:16:59.733 "supported_io_types": { 00:16:59.733 "read": true, 00:16:59.733 "write": true, 00:16:59.733 "unmap": false, 00:16:59.733 "flush": false, 00:16:59.733 "reset": true, 00:16:59.733 "nvme_admin": false, 00:16:59.733 "nvme_io": false, 00:16:59.733 "nvme_io_md": false, 00:16:59.733 "write_zeroes": true, 00:16:59.733 "zcopy": false, 00:16:59.734 "get_zone_info": false, 00:16:59.734 "zone_management": false, 00:16:59.734 "zone_append": false, 00:16:59.734 "compare": false, 00:16:59.734 "compare_and_write": false, 00:16:59.734 "abort": false, 00:16:59.734 "seek_hole": false, 00:16:59.734 "seek_data": false, 00:16:59.734 "copy": false, 00:16:59.734 "nvme_iov_md": false 00:16:59.734 }, 00:16:59.734 "driver_specific": { 00:16:59.734 "raid": { 00:16:59.734 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:16:59.734 "strip_size_kb": 64, 00:16:59.734 "state": "online", 00:16:59.734 "raid_level": "raid5f", 00:16:59.734 "superblock": true, 00:16:59.734 "num_base_bdevs": 4, 00:16:59.734 "num_base_bdevs_discovered": 4, 
00:16:59.734 "num_base_bdevs_operational": 4, 00:16:59.734 "base_bdevs_list": [ 00:16:59.734 { 00:16:59.734 "name": "pt1", 00:16:59.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.734 "is_configured": true, 00:16:59.734 "data_offset": 2048, 00:16:59.734 "data_size": 63488 00:16:59.734 }, 00:16:59.734 { 00:16:59.734 "name": "pt2", 00:16:59.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.734 "is_configured": true, 00:16:59.734 "data_offset": 2048, 00:16:59.734 "data_size": 63488 00:16:59.734 }, 00:16:59.734 { 00:16:59.734 "name": "pt3", 00:16:59.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.734 "is_configured": true, 00:16:59.734 "data_offset": 2048, 00:16:59.734 "data_size": 63488 00:16:59.734 }, 00:16:59.734 { 00:16:59.734 "name": "pt4", 00:16:59.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.734 "is_configured": true, 00:16:59.734 "data_offset": 2048, 00:16:59.734 "data_size": 63488 00:16:59.734 } 00:16:59.734 ] 00:16:59.734 } 00:16:59.734 } 00:16:59.734 }' 00:16:59.734 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.734 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:59.734 pt2 00:16:59.734 pt3 00:16:59.734 pt4' 00:16:59.734 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 08:28:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 
08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:59.994 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 [2024-12-13 08:28:12.344204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
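The `verify_raid_bdev_properties` steps traced above compare a fingerprint of the raid bdev against each base bdev, built by the jq expression `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`; null fields join as empty strings, which is why both sides come out as `512` followed by three spaces. A minimal Python sketch of that comparison (the `fingerprint` helper name is hypothetical; field values are taken from the log, not queried from SPDK):

```python
# Re-create jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`
# fingerprint used by the test to check that the raid bdev and its base bdevs
# agree on block size and metadata layout. Null/missing fields join as "".
def fingerprint(bdev):
    fields = (bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type"))
    return " ".join("" if f is None else str(f) for f in fields)

# Values as reported in the trace: 512-byte blocks, no metadata, no DIF.
raid_bdev = {"block_size": 512}
base_bdev = {"block_size": 512}

# Both sides yield "512" plus three trailing spaces, matching the
# `[[ 512    == \5\1\2\ \ \  ]]` style comparison in the shell trace.
assert fingerprint(raid_bdev) == fingerprint(base_bdev)
print(repr(fingerprint(raid_bdev)))  # prints '512   '
```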
00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1f6e29ed-f934-46c7-a6c2-22025ad83d8a '!=' 1f6e29ed-f934-46c7-a6c2-22025ad83d8a ']' 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.254 [2024-12-13 08:28:12.391961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.254 "name": "raid_bdev1", 00:17:00.254 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:00.254 "strip_size_kb": 64, 00:17:00.254 "state": "online", 00:17:00.254 "raid_level": "raid5f", 00:17:00.254 "superblock": true, 00:17:00.254 "num_base_bdevs": 4, 00:17:00.254 "num_base_bdevs_discovered": 3, 00:17:00.254 "num_base_bdevs_operational": 3, 00:17:00.254 "base_bdevs_list": [ 00:17:00.254 { 00:17:00.254 "name": null, 00:17:00.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.254 "is_configured": false, 00:17:00.254 "data_offset": 0, 00:17:00.254 "data_size": 63488 00:17:00.254 }, 00:17:00.254 { 00:17:00.254 "name": "pt2", 00:17:00.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.254 "is_configured": true, 00:17:00.254 "data_offset": 2048, 00:17:00.254 "data_size": 63488 00:17:00.254 }, 00:17:00.254 { 00:17:00.254 "name": "pt3", 00:17:00.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.254 "is_configured": true, 00:17:00.254 "data_offset": 2048, 00:17:00.254 "data_size": 63488 00:17:00.254 }, 00:17:00.254 { 00:17:00.254 "name": "pt4", 00:17:00.254 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.254 "is_configured": true, 00:17:00.254 
"data_offset": 2048, 00:17:00.254 "data_size": 63488 00:17:00.254 } 00:17:00.254 ] 00:17:00.254 }' 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.254 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.519 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.519 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.519 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 [2024-12-13 08:28:12.883151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.779 [2024-12-13 08:28:12.883238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.779 [2024-12-13 08:28:12.883346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.779 [2024-12-13 08:28:12.883456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.779 [2024-12-13 08:28:12.883523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 [2024-12-13 08:28:12.978946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.779 [2024-12-13 08:28:12.979001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.779 [2024-12-13 08:28:12.979019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:00.779 [2024-12-13 08:28:12.979027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.779 [2024-12-13 08:28:12.981318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.779 [2024-12-13 08:28:12.981355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.779 [2024-12-13 08:28:12.981433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.779 [2024-12-13 08:28:12.981476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.779 pt2 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.779 08:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.779 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.779 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.779 "name": "raid_bdev1", 00:17:00.779 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:00.779 "strip_size_kb": 64, 00:17:00.779 "state": "configuring", 00:17:00.779 "raid_level": "raid5f", 00:17:00.779 "superblock": true, 00:17:00.779 
"num_base_bdevs": 4, 00:17:00.779 "num_base_bdevs_discovered": 1, 00:17:00.779 "num_base_bdevs_operational": 3, 00:17:00.779 "base_bdevs_list": [ 00:17:00.779 { 00:17:00.779 "name": null, 00:17:00.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.779 "is_configured": false, 00:17:00.779 "data_offset": 2048, 00:17:00.779 "data_size": 63488 00:17:00.779 }, 00:17:00.779 { 00:17:00.779 "name": "pt2", 00:17:00.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.779 "is_configured": true, 00:17:00.779 "data_offset": 2048, 00:17:00.779 "data_size": 63488 00:17:00.779 }, 00:17:00.779 { 00:17:00.779 "name": null, 00:17:00.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.779 "is_configured": false, 00:17:00.779 "data_offset": 2048, 00:17:00.779 "data_size": 63488 00:17:00.779 }, 00:17:00.779 { 00:17:00.779 "name": null, 00:17:00.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.779 "is_configured": false, 00:17:00.779 "data_offset": 2048, 00:17:00.779 "data_size": 63488 00:17:00.779 } 00:17:00.779 ] 00:17:00.779 }' 00:17:00.779 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.780 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.350 [2024-12-13 08:28:13.438255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.350 [2024-12-13 
08:28:13.438393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.350 [2024-12-13 08:28:13.438438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:01.350 [2024-12-13 08:28:13.438470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.350 [2024-12-13 08:28:13.438930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.350 [2024-12-13 08:28:13.438989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.350 [2024-12-13 08:28:13.439117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:01.350 [2024-12-13 08:28:13.439169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.350 pt3 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.350 "name": "raid_bdev1", 00:17:01.350 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:01.350 "strip_size_kb": 64, 00:17:01.350 "state": "configuring", 00:17:01.350 "raid_level": "raid5f", 00:17:01.350 "superblock": true, 00:17:01.350 "num_base_bdevs": 4, 00:17:01.350 "num_base_bdevs_discovered": 2, 00:17:01.350 "num_base_bdevs_operational": 3, 00:17:01.350 "base_bdevs_list": [ 00:17:01.350 { 00:17:01.350 "name": null, 00:17:01.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.350 "is_configured": false, 00:17:01.350 "data_offset": 2048, 00:17:01.350 "data_size": 63488 00:17:01.350 }, 00:17:01.350 { 00:17:01.350 "name": "pt2", 00:17:01.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.350 "is_configured": true, 00:17:01.350 "data_offset": 2048, 00:17:01.350 "data_size": 63488 00:17:01.350 }, 00:17:01.350 { 00:17:01.350 "name": "pt3", 00:17:01.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.350 "is_configured": true, 00:17:01.350 "data_offset": 2048, 00:17:01.350 "data_size": 63488 00:17:01.350 }, 00:17:01.350 { 00:17:01.350 "name": null, 00:17:01.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.350 "is_configured": false, 00:17:01.350 "data_offset": 2048, 
00:17:01.350 "data_size": 63488 00:17:01.350 } 00:17:01.350 ] 00:17:01.350 }' 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.350 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.609 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.610 [2024-12-13 08:28:13.865546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:01.610 [2024-12-13 08:28:13.865679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.610 [2024-12-13 08:28:13.865710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:01.610 [2024-12-13 08:28:13.865721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.610 [2024-12-13 08:28:13.866178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.610 [2024-12-13 08:28:13.866199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:01.610 [2024-12-13 08:28:13.866284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:01.610 [2024-12-13 08:28:13.866313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:01.610 [2024-12-13 08:28:13.866454] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.610 [2024-12-13 08:28:13.866463] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.610 [2024-12-13 08:28:13.866705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:01.610 [2024-12-13 08:28:13.874233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.610 [2024-12-13 08:28:13.874262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:01.610 [2024-12-13 08:28:13.874563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.610 pt4 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.610 
08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.610 "name": "raid_bdev1", 00:17:01.610 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:01.610 "strip_size_kb": 64, 00:17:01.610 "state": "online", 00:17:01.610 "raid_level": "raid5f", 00:17:01.610 "superblock": true, 00:17:01.610 "num_base_bdevs": 4, 00:17:01.610 "num_base_bdevs_discovered": 3, 00:17:01.610 "num_base_bdevs_operational": 3, 00:17:01.610 "base_bdevs_list": [ 00:17:01.610 { 00:17:01.610 "name": null, 00:17:01.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.610 "is_configured": false, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.610 "name": "pt2", 00:17:01.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.610 "is_configured": true, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.610 "name": "pt3", 00:17:01.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.610 "is_configured": true, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.610 "name": "pt4", 00:17:01.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.610 "is_configured": true, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 } 00:17:01.610 ] 00:17:01.610 }' 00:17:01.610 08:28:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.610 08:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 [2024-12-13 08:28:14.351615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.179 [2024-12-13 08:28:14.351648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.179 [2024-12-13 08:28:14.351730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.179 [2024-12-13 08:28:14.351813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.179 [2024-12-13 08:28:14.351826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.179 [2024-12-13 08:28:14.427466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.179 [2024-12-13 08:28:14.427533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.179 [2024-12-13 08:28:14.427560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:02.179 [2024-12-13 08:28:14.427571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.179 [2024-12-13 08:28:14.429989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.179 [2024-12-13 08:28:14.430073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.179 [2024-12-13 08:28:14.430206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.179 [2024-12-13 08:28:14.430270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.179 
[2024-12-13 08:28:14.430447] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:02.179 [2024-12-13 08:28:14.430476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.179 [2024-12-13 08:28:14.430491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:02.179 [2024-12-13 08:28:14.430551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.179 [2024-12-13 08:28:14.430650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.179 pt1 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.179 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.180 "name": "raid_bdev1", 00:17:02.180 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:02.180 "strip_size_kb": 64, 00:17:02.180 "state": "configuring", 00:17:02.180 "raid_level": "raid5f", 00:17:02.180 "superblock": true, 00:17:02.180 "num_base_bdevs": 4, 00:17:02.180 "num_base_bdevs_discovered": 2, 00:17:02.180 "num_base_bdevs_operational": 3, 00:17:02.180 "base_bdevs_list": [ 00:17:02.180 { 00:17:02.180 "name": null, 00:17:02.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.180 "is_configured": false, 00:17:02.180 "data_offset": 2048, 00:17:02.180 "data_size": 63488 00:17:02.180 }, 00:17:02.180 { 00:17:02.180 "name": "pt2", 00:17:02.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.180 "is_configured": true, 00:17:02.180 "data_offset": 2048, 00:17:02.180 "data_size": 63488 00:17:02.180 }, 00:17:02.180 { 00:17:02.180 "name": "pt3", 00:17:02.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.180 "is_configured": true, 00:17:02.180 "data_offset": 2048, 00:17:02.180 "data_size": 63488 00:17:02.180 }, 00:17:02.180 { 00:17:02.180 "name": null, 00:17:02.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.180 "is_configured": false, 00:17:02.180 "data_offset": 2048, 00:17:02.180 "data_size": 63488 00:17:02.180 } 00:17:02.180 ] 
00:17:02.180 }' 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.180 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.749 [2024-12-13 08:28:14.942647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:02.749 [2024-12-13 08:28:14.942715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.749 [2024-12-13 08:28:14.942741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:02.749 [2024-12-13 08:28:14.942752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.749 [2024-12-13 08:28:14.943253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.749 [2024-12-13 08:28:14.943282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:02.749 [2024-12-13 08:28:14.943377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:02.749 [2024-12-13 08:28:14.943401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:02.749 [2024-12-13 08:28:14.943557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:02.749 [2024-12-13 08:28:14.943566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.749 [2024-12-13 08:28:14.943862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:02.749 [2024-12-13 08:28:14.952755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:02.749 [2024-12-13 08:28:14.952831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:02.749 [2024-12-13 08:28:14.953186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.749 pt4 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.749 08:28:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.749 08:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.749 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.749 "name": "raid_bdev1", 00:17:02.749 "uuid": "1f6e29ed-f934-46c7-a6c2-22025ad83d8a", 00:17:02.749 "strip_size_kb": 64, 00:17:02.749 "state": "online", 00:17:02.749 "raid_level": "raid5f", 00:17:02.749 "superblock": true, 00:17:02.749 "num_base_bdevs": 4, 00:17:02.749 "num_base_bdevs_discovered": 3, 00:17:02.749 "num_base_bdevs_operational": 3, 00:17:02.749 "base_bdevs_list": [ 00:17:02.749 { 00:17:02.749 "name": null, 00:17:02.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.749 "is_configured": false, 00:17:02.749 "data_offset": 2048, 00:17:02.749 "data_size": 63488 00:17:02.749 }, 00:17:02.749 { 00:17:02.749 "name": "pt2", 00:17:02.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.749 "is_configured": true, 00:17:02.749 "data_offset": 2048, 00:17:02.749 "data_size": 63488 00:17:02.749 }, 00:17:02.749 { 00:17:02.749 "name": "pt3", 00:17:02.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.749 "is_configured": true, 00:17:02.749 "data_offset": 2048, 00:17:02.749 "data_size": 63488 
00:17:02.749 }, 00:17:02.749 { 00:17:02.750 "name": "pt4", 00:17:02.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.750 "is_configured": true, 00:17:02.750 "data_offset": 2048, 00:17:02.750 "data_size": 63488 00:17:02.750 } 00:17:02.750 ] 00:17:02.750 }' 00:17:02.750 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.750 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:03.317 [2024-12-13 08:28:15.458447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1f6e29ed-f934-46c7-a6c2-22025ad83d8a '!=' 1f6e29ed-f934-46c7-a6c2-22025ad83d8a ']' 00:17:03.317 08:28:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84298 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84298 ']' 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84298 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84298 00:17:03.317 killing process with pid 84298 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84298' 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84298 00:17:03.317 [2024-12-13 08:28:15.544663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.317 [2024-12-13 08:28:15.544756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.317 [2024-12-13 08:28:15.544834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.317 08:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84298 00:17:03.317 [2024-12-13 08:28:15.544848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:03.576 [2024-12-13 08:28:15.934610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.956 08:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:04.956 
00:17:04.956 real 0m8.701s 00:17:04.956 user 0m13.717s 00:17:04.956 sys 0m1.581s 00:17:04.956 08:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.956 08:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 ************************************ 00:17:04.956 END TEST raid5f_superblock_test 00:17:04.956 ************************************ 00:17:04.956 08:28:17 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:04.956 08:28:17 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:04.956 08:28:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:04.956 08:28:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.956 08:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 ************************************ 00:17:04.956 START TEST raid5f_rebuild_test 00:17:04.956 ************************************ 00:17:04.956 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:04.956 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:04.956 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:04.957 08:28:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84778 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84778 00:17:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84778 ']' 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.957 08:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.957 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:04.957 Zero copy mechanism will not be used. 00:17:04.957 [2024-12-13 08:28:17.226092] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:04.957 [2024-12-13 08:28:17.226211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84778 ] 00:17:05.216 [2024-12-13 08:28:17.397517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.216 [2024-12-13 08:28:17.514864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.482 [2024-12-13 08:28:17.716663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.482 [2024-12-13 08:28:17.716724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.752 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 BaseBdev1_malloc 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 [2024-12-13 08:28:18.138402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:06.012 [2024-12-13 08:28:18.138507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.012 [2024-12-13 08:28:18.138548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.012 [2024-12-13 08:28:18.138560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.012 [2024-12-13 08:28:18.140644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.012 [2024-12-13 08:28:18.140686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.012 BaseBdev1 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 BaseBdev2_malloc 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 [2024-12-13 08:28:18.190982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.012 [2024-12-13 08:28:18.191055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.012 [2024-12-13 08:28:18.191074] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.012 [2024-12-13 08:28:18.191085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.012 [2024-12-13 08:28:18.193183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.012 [2024-12-13 08:28:18.193216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.012 BaseBdev2 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 BaseBdev3_malloc 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.012 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 [2024-12-13 08:28:18.257757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.012 [2024-12-13 08:28:18.257811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.013 [2024-12-13 08:28:18.257832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.013 [2024-12-13 08:28:18.257842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.013 
[2024-12-13 08:28:18.259844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.013 [2024-12-13 08:28:18.259885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.013 BaseBdev3 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.013 BaseBdev4_malloc 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.013 [2024-12-13 08:28:18.314667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:06.013 [2024-12-13 08:28:18.314745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.013 [2024-12-13 08:28:18.314768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:06.013 [2024-12-13 08:28:18.314780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.013 [2024-12-13 08:28:18.317020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.013 [2024-12-13 08:28:18.317075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:06.013 BaseBdev4 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.013 spare_malloc 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.013 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.272 spare_delay 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.272 [2024-12-13 08:28:18.383242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.272 [2024-12-13 08:28:18.383347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.272 [2024-12-13 08:28:18.383385] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:06.272 [2024-12-13 08:28:18.383396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.272 [2024-12-13 08:28:18.385460] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.272 [2024-12-13 08:28:18.385500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.272 spare 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.272 [2024-12-13 08:28:18.395277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.272 [2024-12-13 08:28:18.397124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.272 [2024-12-13 08:28:18.397196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.272 [2024-12-13 08:28:18.397247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.272 [2024-12-13 08:28:18.397338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.272 [2024-12-13 08:28:18.397353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:06.272 [2024-12-13 08:28:18.397593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:06.272 [2024-12-13 08:28:18.405035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.272 [2024-12-13 08:28:18.405056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.272 [2024-12-13 08:28:18.405302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.272 08:28:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.272 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.273 "name": "raid_bdev1", 00:17:06.273 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:06.273 "strip_size_kb": 64, 00:17:06.273 "state": "online", 00:17:06.273 
"raid_level": "raid5f", 00:17:06.273 "superblock": false, 00:17:06.273 "num_base_bdevs": 4, 00:17:06.273 "num_base_bdevs_discovered": 4, 00:17:06.273 "num_base_bdevs_operational": 4, 00:17:06.273 "base_bdevs_list": [ 00:17:06.273 { 00:17:06.273 "name": "BaseBdev1", 00:17:06.273 "uuid": "97070764-dca6-543c-a1e7-1c6bfd1fa2f1", 00:17:06.273 "is_configured": true, 00:17:06.273 "data_offset": 0, 00:17:06.273 "data_size": 65536 00:17:06.273 }, 00:17:06.273 { 00:17:06.273 "name": "BaseBdev2", 00:17:06.273 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:06.273 "is_configured": true, 00:17:06.273 "data_offset": 0, 00:17:06.273 "data_size": 65536 00:17:06.273 }, 00:17:06.273 { 00:17:06.273 "name": "BaseBdev3", 00:17:06.273 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:06.273 "is_configured": true, 00:17:06.273 "data_offset": 0, 00:17:06.273 "data_size": 65536 00:17:06.273 }, 00:17:06.273 { 00:17:06.273 "name": "BaseBdev4", 00:17:06.273 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:06.273 "is_configured": true, 00:17:06.273 "data_offset": 0, 00:17:06.273 "data_size": 65536 00:17:06.273 } 00:17:06.273 ] 00:17:06.273 }' 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.273 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.532 [2024-12-13 08:28:18.829474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.532 08:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:06.792 08:28:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:06.792 [2024-12-13 08:28:19.100840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:06.792 /dev/nbd0 00:17:06.792 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.051 1+0 records in 00:17:07.051 1+0 records out 00:17:07.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608869 s, 6.7 MB/s 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:07.051 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:07.621 512+0 records in 00:17:07.621 512+0 records out 00:17:07.621 100663296 bytes (101 MB, 96 MiB) copied, 0.525732 s, 191 MB/s 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.621 
[2024-12-13 08:28:19.928335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.621 [2024-12-13 08:28:19.943038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.621 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.881 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.881 "name": "raid_bdev1", 00:17:07.881 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:07.881 "strip_size_kb": 64, 00:17:07.881 "state": "online", 00:17:07.881 "raid_level": "raid5f", 00:17:07.881 "superblock": false, 00:17:07.881 "num_base_bdevs": 4, 00:17:07.881 "num_base_bdevs_discovered": 3, 00:17:07.881 "num_base_bdevs_operational": 3, 00:17:07.881 "base_bdevs_list": [ 00:17:07.881 { 00:17:07.881 "name": null, 00:17:07.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.881 "is_configured": false, 00:17:07.881 "data_offset": 0, 00:17:07.881 "data_size": 65536 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "name": "BaseBdev2", 00:17:07.881 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 0, 00:17:07.881 "data_size": 65536 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "name": "BaseBdev3", 00:17:07.881 "uuid": 
"27046139-088f-5666-95e4-5e41245f4132", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 0, 00:17:07.881 "data_size": 65536 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "name": "BaseBdev4", 00:17:07.881 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 0, 00:17:07.881 "data_size": 65536 00:17:07.881 } 00:17:07.881 ] 00:17:07.881 }' 00:17:07.881 08:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.881 08:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.140 08:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.140 08:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.140 08:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.140 [2024-12-13 08:28:20.390283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.140 [2024-12-13 08:28:20.406734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:08.140 08:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.140 08:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:08.140 [2024-12-13 08:28:20.416015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.079 08:28:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.079 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.339 "name": "raid_bdev1", 00:17:09.339 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:09.339 "strip_size_kb": 64, 00:17:09.339 "state": "online", 00:17:09.339 "raid_level": "raid5f", 00:17:09.339 "superblock": false, 00:17:09.339 "num_base_bdevs": 4, 00:17:09.339 "num_base_bdevs_discovered": 4, 00:17:09.339 "num_base_bdevs_operational": 4, 00:17:09.339 "process": { 00:17:09.339 "type": "rebuild", 00:17:09.339 "target": "spare", 00:17:09.339 "progress": { 00:17:09.339 "blocks": 19200, 00:17:09.339 "percent": 9 00:17:09.339 } 00:17:09.339 }, 00:17:09.339 "base_bdevs_list": [ 00:17:09.339 { 00:17:09.339 "name": "spare", 00:17:09.339 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:09.339 "is_configured": true, 00:17:09.339 "data_offset": 0, 00:17:09.339 "data_size": 65536 00:17:09.339 }, 00:17:09.339 { 00:17:09.339 "name": "BaseBdev2", 00:17:09.339 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:09.339 "is_configured": true, 00:17:09.339 "data_offset": 0, 00:17:09.339 "data_size": 65536 00:17:09.339 }, 00:17:09.339 { 00:17:09.339 "name": "BaseBdev3", 00:17:09.339 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:09.339 "is_configured": true, 00:17:09.339 "data_offset": 0, 00:17:09.339 "data_size": 65536 00:17:09.339 }, 
00:17:09.339 { 00:17:09.339 "name": "BaseBdev4", 00:17:09.339 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:09.339 "is_configured": true, 00:17:09.339 "data_offset": 0, 00:17:09.339 "data_size": 65536 00:17:09.339 } 00:17:09.339 ] 00:17:09.339 }' 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.339 [2024-12-13 08:28:21.575502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.339 [2024-12-13 08:28:21.623549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:09.339 [2024-12-13 08:28:21.623632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.339 [2024-12-13 08:28:21.623652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.339 [2024-12-13 08:28:21.623665] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.339 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.599 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.599 "name": "raid_bdev1", 00:17:09.599 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:09.599 "strip_size_kb": 64, 00:17:09.599 "state": "online", 00:17:09.599 "raid_level": "raid5f", 00:17:09.599 "superblock": false, 00:17:09.599 "num_base_bdevs": 4, 00:17:09.599 "num_base_bdevs_discovered": 3, 00:17:09.599 "num_base_bdevs_operational": 3, 00:17:09.599 "base_bdevs_list": [ 00:17:09.599 { 00:17:09.599 "name": null, 00:17:09.599 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:09.599 "is_configured": false, 00:17:09.599 "data_offset": 0, 00:17:09.599 "data_size": 65536 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": "BaseBdev2", 00:17:09.599 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:09.599 "is_configured": true, 00:17:09.599 "data_offset": 0, 00:17:09.599 "data_size": 65536 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": "BaseBdev3", 00:17:09.599 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:09.599 "is_configured": true, 00:17:09.599 "data_offset": 0, 00:17:09.599 "data_size": 65536 00:17:09.599 }, 00:17:09.599 { 00:17:09.599 "name": "BaseBdev4", 00:17:09.599 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:09.599 "is_configured": true, 00:17:09.599 "data_offset": 0, 00:17:09.599 "data_size": 65536 00:17:09.599 } 00:17:09.599 ] 00:17:09.599 }' 00:17:09.599 08:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.599 08:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.858 08:28:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.858 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.858 "name": "raid_bdev1", 00:17:09.858 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:09.858 "strip_size_kb": 64, 00:17:09.858 "state": "online", 00:17:09.858 "raid_level": "raid5f", 00:17:09.858 "superblock": false, 00:17:09.858 "num_base_bdevs": 4, 00:17:09.858 "num_base_bdevs_discovered": 3, 00:17:09.858 "num_base_bdevs_operational": 3, 00:17:09.858 "base_bdevs_list": [ 00:17:09.858 { 00:17:09.858 "name": null, 00:17:09.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.858 "is_configured": false, 00:17:09.859 "data_offset": 0, 00:17:09.859 "data_size": 65536 00:17:09.859 }, 00:17:09.859 { 00:17:09.859 "name": "BaseBdev2", 00:17:09.859 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:09.859 "is_configured": true, 00:17:09.859 "data_offset": 0, 00:17:09.859 "data_size": 65536 00:17:09.859 }, 00:17:09.859 { 00:17:09.859 "name": "BaseBdev3", 00:17:09.859 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:09.859 "is_configured": true, 00:17:09.859 "data_offset": 0, 00:17:09.859 "data_size": 65536 00:17:09.859 }, 00:17:09.859 { 00:17:09.859 "name": "BaseBdev4", 00:17:09.859 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:09.859 "is_configured": true, 00:17:09.859 "data_offset": 0, 00:17:09.859 "data_size": 65536 00:17:09.859 } 00:17:09.859 ] 00:17:09.859 }' 00:17:09.859 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.859 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.859 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.118 [2024-12-13 08:28:22.270363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.118 [2024-12-13 08:28:22.285704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.118 08:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:10.118 [2024-12-13 08:28:22.295102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.057 08:28:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.057 "name": "raid_bdev1", 00:17:11.057 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:11.057 "strip_size_kb": 64, 00:17:11.057 "state": "online", 00:17:11.057 "raid_level": "raid5f", 00:17:11.057 "superblock": false, 00:17:11.057 "num_base_bdevs": 4, 00:17:11.057 "num_base_bdevs_discovered": 4, 00:17:11.057 "num_base_bdevs_operational": 4, 00:17:11.057 "process": { 00:17:11.057 "type": "rebuild", 00:17:11.057 "target": "spare", 00:17:11.057 "progress": { 00:17:11.057 "blocks": 19200, 00:17:11.057 "percent": 9 00:17:11.057 } 00:17:11.057 }, 00:17:11.057 "base_bdevs_list": [ 00:17:11.057 { 00:17:11.057 "name": "spare", 00:17:11.057 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:11.057 "is_configured": true, 00:17:11.057 "data_offset": 0, 00:17:11.057 "data_size": 65536 00:17:11.057 }, 00:17:11.057 { 00:17:11.057 "name": "BaseBdev2", 00:17:11.057 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:11.057 "is_configured": true, 00:17:11.057 "data_offset": 0, 00:17:11.057 "data_size": 65536 00:17:11.057 }, 00:17:11.057 { 00:17:11.057 "name": "BaseBdev3", 00:17:11.057 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:11.057 "is_configured": true, 00:17:11.057 "data_offset": 0, 00:17:11.057 "data_size": 65536 00:17:11.057 }, 00:17:11.057 { 00:17:11.057 "name": "BaseBdev4", 00:17:11.057 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:11.057 "is_configured": true, 00:17:11.057 "data_offset": 0, 00:17:11.057 "data_size": 65536 00:17:11.057 } 00:17:11.057 ] 00:17:11.057 }' 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.057 08:28:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=624 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.317 "name": "raid_bdev1", 00:17:11.317 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 
00:17:11.317 "strip_size_kb": 64, 00:17:11.317 "state": "online", 00:17:11.317 "raid_level": "raid5f", 00:17:11.317 "superblock": false, 00:17:11.317 "num_base_bdevs": 4, 00:17:11.317 "num_base_bdevs_discovered": 4, 00:17:11.317 "num_base_bdevs_operational": 4, 00:17:11.317 "process": { 00:17:11.317 "type": "rebuild", 00:17:11.317 "target": "spare", 00:17:11.317 "progress": { 00:17:11.317 "blocks": 21120, 00:17:11.317 "percent": 10 00:17:11.317 } 00:17:11.317 }, 00:17:11.317 "base_bdevs_list": [ 00:17:11.317 { 00:17:11.317 "name": "spare", 00:17:11.317 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:11.317 "is_configured": true, 00:17:11.317 "data_offset": 0, 00:17:11.317 "data_size": 65536 00:17:11.317 }, 00:17:11.317 { 00:17:11.317 "name": "BaseBdev2", 00:17:11.317 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:11.317 "is_configured": true, 00:17:11.317 "data_offset": 0, 00:17:11.317 "data_size": 65536 00:17:11.317 }, 00:17:11.317 { 00:17:11.317 "name": "BaseBdev3", 00:17:11.317 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:11.317 "is_configured": true, 00:17:11.317 "data_offset": 0, 00:17:11.317 "data_size": 65536 00:17:11.317 }, 00:17:11.317 { 00:17:11.317 "name": "BaseBdev4", 00:17:11.317 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:11.317 "is_configured": true, 00:17:11.317 "data_offset": 0, 00:17:11.317 "data_size": 65536 00:17:11.317 } 00:17:11.317 ] 00:17:11.317 }' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.317 08:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.255 08:28:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.255 08:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.515 "name": "raid_bdev1", 00:17:12.515 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:12.515 "strip_size_kb": 64, 00:17:12.515 "state": "online", 00:17:12.515 "raid_level": "raid5f", 00:17:12.515 "superblock": false, 00:17:12.515 "num_base_bdevs": 4, 00:17:12.515 "num_base_bdevs_discovered": 4, 00:17:12.515 "num_base_bdevs_operational": 4, 00:17:12.515 "process": { 00:17:12.515 "type": "rebuild", 00:17:12.515 "target": "spare", 00:17:12.515 "progress": { 00:17:12.515 "blocks": 42240, 00:17:12.515 "percent": 21 00:17:12.515 } 00:17:12.515 }, 00:17:12.515 "base_bdevs_list": [ 00:17:12.515 { 00:17:12.515 "name": "spare", 00:17:12.515 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 
00:17:12.515 "is_configured": true, 00:17:12.515 "data_offset": 0, 00:17:12.515 "data_size": 65536 00:17:12.515 }, 00:17:12.515 { 00:17:12.515 "name": "BaseBdev2", 00:17:12.515 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:12.515 "is_configured": true, 00:17:12.515 "data_offset": 0, 00:17:12.515 "data_size": 65536 00:17:12.515 }, 00:17:12.515 { 00:17:12.515 "name": "BaseBdev3", 00:17:12.515 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:12.515 "is_configured": true, 00:17:12.515 "data_offset": 0, 00:17:12.515 "data_size": 65536 00:17:12.515 }, 00:17:12.515 { 00:17:12.515 "name": "BaseBdev4", 00:17:12.515 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:12.515 "is_configured": true, 00:17:12.515 "data_offset": 0, 00:17:12.515 "data_size": 65536 00:17:12.515 } 00:17:12.515 ] 00:17:12.515 }' 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.515 08:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.453 "name": "raid_bdev1", 00:17:13.453 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:13.453 "strip_size_kb": 64, 00:17:13.453 "state": "online", 00:17:13.453 "raid_level": "raid5f", 00:17:13.453 "superblock": false, 00:17:13.453 "num_base_bdevs": 4, 00:17:13.453 "num_base_bdevs_discovered": 4, 00:17:13.453 "num_base_bdevs_operational": 4, 00:17:13.453 "process": { 00:17:13.453 "type": "rebuild", 00:17:13.453 "target": "spare", 00:17:13.453 "progress": { 00:17:13.453 "blocks": 65280, 00:17:13.453 "percent": 33 00:17:13.453 } 00:17:13.453 }, 00:17:13.453 "base_bdevs_list": [ 00:17:13.453 { 00:17:13.453 "name": "spare", 00:17:13.453 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:13.453 "is_configured": true, 00:17:13.453 "data_offset": 0, 00:17:13.453 "data_size": 65536 00:17:13.453 }, 00:17:13.453 { 00:17:13.453 "name": "BaseBdev2", 00:17:13.453 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:13.453 "is_configured": true, 00:17:13.453 "data_offset": 0, 00:17:13.453 "data_size": 65536 00:17:13.453 }, 00:17:13.453 { 00:17:13.453 "name": "BaseBdev3", 00:17:13.453 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:13.453 "is_configured": true, 00:17:13.453 "data_offset": 0, 00:17:13.453 "data_size": 65536 00:17:13.453 }, 00:17:13.453 { 00:17:13.453 "name": 
"BaseBdev4", 00:17:13.453 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:13.453 "is_configured": true, 00:17:13.453 "data_offset": 0, 00:17:13.453 "data_size": 65536 00:17:13.453 } 00:17:13.453 ] 00:17:13.453 }' 00:17:13.453 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.712 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.712 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.712 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.712 08:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.650 08:28:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.650 "name": "raid_bdev1", 00:17:14.650 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:14.650 "strip_size_kb": 64, 00:17:14.650 "state": "online", 00:17:14.650 "raid_level": "raid5f", 00:17:14.650 "superblock": false, 00:17:14.650 "num_base_bdevs": 4, 00:17:14.650 "num_base_bdevs_discovered": 4, 00:17:14.650 "num_base_bdevs_operational": 4, 00:17:14.650 "process": { 00:17:14.650 "type": "rebuild", 00:17:14.650 "target": "spare", 00:17:14.650 "progress": { 00:17:14.650 "blocks": 86400, 00:17:14.650 "percent": 43 00:17:14.650 } 00:17:14.650 }, 00:17:14.650 "base_bdevs_list": [ 00:17:14.650 { 00:17:14.650 "name": "spare", 00:17:14.650 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:14.650 "is_configured": true, 00:17:14.650 "data_offset": 0, 00:17:14.650 "data_size": 65536 00:17:14.650 }, 00:17:14.650 { 00:17:14.650 "name": "BaseBdev2", 00:17:14.650 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:14.650 "is_configured": true, 00:17:14.650 "data_offset": 0, 00:17:14.650 "data_size": 65536 00:17:14.650 }, 00:17:14.650 { 00:17:14.650 "name": "BaseBdev3", 00:17:14.650 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:14.650 "is_configured": true, 00:17:14.650 "data_offset": 0, 00:17:14.650 "data_size": 65536 00:17:14.650 }, 00:17:14.650 { 00:17:14.650 "name": "BaseBdev4", 00:17:14.650 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:14.650 "is_configured": true, 00:17:14.650 "data_offset": 0, 00:17:14.650 "data_size": 65536 00:17:14.650 } 00:17:14.650 ] 00:17:14.650 }' 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.650 08:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.910 08:28:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.910 08:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.848 "name": "raid_bdev1", 00:17:15.848 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:15.848 "strip_size_kb": 64, 00:17:15.848 "state": "online", 00:17:15.848 "raid_level": "raid5f", 00:17:15.848 "superblock": false, 00:17:15.848 "num_base_bdevs": 4, 00:17:15.848 "num_base_bdevs_discovered": 4, 00:17:15.848 "num_base_bdevs_operational": 4, 00:17:15.848 "process": { 00:17:15.848 "type": "rebuild", 00:17:15.848 "target": "spare", 00:17:15.848 "progress": { 00:17:15.848 "blocks": 109440, 00:17:15.848 "percent": 55 00:17:15.848 } 
00:17:15.848 }, 00:17:15.848 "base_bdevs_list": [ 00:17:15.848 { 00:17:15.848 "name": "spare", 00:17:15.848 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:15.848 "is_configured": true, 00:17:15.848 "data_offset": 0, 00:17:15.848 "data_size": 65536 00:17:15.848 }, 00:17:15.848 { 00:17:15.848 "name": "BaseBdev2", 00:17:15.848 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:15.848 "is_configured": true, 00:17:15.848 "data_offset": 0, 00:17:15.848 "data_size": 65536 00:17:15.848 }, 00:17:15.848 { 00:17:15.848 "name": "BaseBdev3", 00:17:15.848 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:15.848 "is_configured": true, 00:17:15.848 "data_offset": 0, 00:17:15.848 "data_size": 65536 00:17:15.848 }, 00:17:15.848 { 00:17:15.848 "name": "BaseBdev4", 00:17:15.848 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:15.848 "is_configured": true, 00:17:15.848 "data_offset": 0, 00:17:15.848 "data_size": 65536 00:17:15.848 } 00:17:15.848 ] 00:17:15.848 }' 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.848 08:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.228 
08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.228 "name": "raid_bdev1", 00:17:17.228 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:17.228 "strip_size_kb": 64, 00:17:17.228 "state": "online", 00:17:17.228 "raid_level": "raid5f", 00:17:17.228 "superblock": false, 00:17:17.228 "num_base_bdevs": 4, 00:17:17.228 "num_base_bdevs_discovered": 4, 00:17:17.228 "num_base_bdevs_operational": 4, 00:17:17.228 "process": { 00:17:17.228 "type": "rebuild", 00:17:17.228 "target": "spare", 00:17:17.228 "progress": { 00:17:17.228 "blocks": 130560, 00:17:17.228 "percent": 66 00:17:17.228 } 00:17:17.228 }, 00:17:17.228 "base_bdevs_list": [ 00:17:17.228 { 00:17:17.228 "name": "spare", 00:17:17.228 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:17.228 "is_configured": true, 00:17:17.228 "data_offset": 0, 00:17:17.228 "data_size": 65536 00:17:17.228 }, 00:17:17.228 { 00:17:17.228 "name": "BaseBdev2", 00:17:17.228 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:17.228 "is_configured": true, 00:17:17.228 "data_offset": 0, 00:17:17.228 "data_size": 65536 00:17:17.228 }, 00:17:17.228 { 00:17:17.228 "name": "BaseBdev3", 00:17:17.228 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 
00:17:17.228 "is_configured": true, 00:17:17.228 "data_offset": 0, 00:17:17.228 "data_size": 65536 00:17:17.228 }, 00:17:17.228 { 00:17:17.228 "name": "BaseBdev4", 00:17:17.228 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:17.228 "is_configured": true, 00:17:17.228 "data_offset": 0, 00:17:17.228 "data_size": 65536 00:17:17.228 } 00:17:17.228 ] 00:17:17.228 }' 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.228 08:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.167 "name": "raid_bdev1", 00:17:18.167 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:18.167 "strip_size_kb": 64, 00:17:18.167 "state": "online", 00:17:18.167 "raid_level": "raid5f", 00:17:18.167 "superblock": false, 00:17:18.167 "num_base_bdevs": 4, 00:17:18.167 "num_base_bdevs_discovered": 4, 00:17:18.167 "num_base_bdevs_operational": 4, 00:17:18.167 "process": { 00:17:18.167 "type": "rebuild", 00:17:18.167 "target": "spare", 00:17:18.167 "progress": { 00:17:18.167 "blocks": 153600, 00:17:18.167 "percent": 78 00:17:18.167 } 00:17:18.167 }, 00:17:18.167 "base_bdevs_list": [ 00:17:18.167 { 00:17:18.167 "name": "spare", 00:17:18.167 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev2", 00:17:18.167 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev3", 00:17:18.167 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev4", 00:17:18.167 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 } 00:17:18.167 ] 00:17:18.167 }' 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.167 08:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.546 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.546 "name": "raid_bdev1", 00:17:19.546 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:19.546 "strip_size_kb": 64, 00:17:19.546 "state": "online", 00:17:19.546 "raid_level": "raid5f", 00:17:19.546 "superblock": false, 00:17:19.546 "num_base_bdevs": 4, 00:17:19.546 "num_base_bdevs_discovered": 4, 00:17:19.546 "num_base_bdevs_operational": 4, 00:17:19.546 
"process": { 00:17:19.546 "type": "rebuild", 00:17:19.546 "target": "spare", 00:17:19.546 "progress": { 00:17:19.546 "blocks": 174720, 00:17:19.547 "percent": 88 00:17:19.547 } 00:17:19.547 }, 00:17:19.547 "base_bdevs_list": [ 00:17:19.547 { 00:17:19.547 "name": "spare", 00:17:19.547 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:19.547 "is_configured": true, 00:17:19.547 "data_offset": 0, 00:17:19.547 "data_size": 65536 00:17:19.547 }, 00:17:19.547 { 00:17:19.547 "name": "BaseBdev2", 00:17:19.547 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:19.547 "is_configured": true, 00:17:19.547 "data_offset": 0, 00:17:19.547 "data_size": 65536 00:17:19.547 }, 00:17:19.547 { 00:17:19.547 "name": "BaseBdev3", 00:17:19.547 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:19.547 "is_configured": true, 00:17:19.547 "data_offset": 0, 00:17:19.547 "data_size": 65536 00:17:19.547 }, 00:17:19.547 { 00:17:19.547 "name": "BaseBdev4", 00:17:19.547 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:19.547 "is_configured": true, 00:17:19.547 "data_offset": 0, 00:17:19.547 "data_size": 65536 00:17:19.547 } 00:17:19.547 ] 00:17:19.547 }' 00:17:19.547 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.547 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.547 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.547 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.547 08:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.487 [2024-12-13 08:28:32.656361] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:20.487 [2024-12-13 08:28:32.656438] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:20.487 [2024-12-13 08:28:32.656484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.487 "name": "raid_bdev1", 00:17:20.487 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:20.487 "strip_size_kb": 64, 00:17:20.487 "state": "online", 00:17:20.487 "raid_level": "raid5f", 00:17:20.487 "superblock": false, 00:17:20.487 "num_base_bdevs": 4, 00:17:20.487 "num_base_bdevs_discovered": 4, 00:17:20.487 "num_base_bdevs_operational": 4, 00:17:20.487 "base_bdevs_list": [ 00:17:20.487 { 00:17:20.487 "name": "spare", 00:17:20.487 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:20.487 "is_configured": true, 00:17:20.487 "data_offset": 0, 00:17:20.487 "data_size": 65536 
00:17:20.487 }, 00:17:20.487 { 00:17:20.487 "name": "BaseBdev2", 00:17:20.487 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:20.487 "is_configured": true, 00:17:20.487 "data_offset": 0, 00:17:20.487 "data_size": 65536 00:17:20.487 }, 00:17:20.487 { 00:17:20.487 "name": "BaseBdev3", 00:17:20.487 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:20.487 "is_configured": true, 00:17:20.487 "data_offset": 0, 00:17:20.487 "data_size": 65536 00:17:20.487 }, 00:17:20.487 { 00:17:20.487 "name": "BaseBdev4", 00:17:20.487 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:20.487 "is_configured": true, 00:17:20.487 "data_offset": 0, 00:17:20.487 "data_size": 65536 00:17:20.487 } 00:17:20.487 ] 00:17:20.487 }' 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.487 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.488 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.760 "name": "raid_bdev1", 00:17:20.760 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:20.760 "strip_size_kb": 64, 00:17:20.760 "state": "online", 00:17:20.760 "raid_level": "raid5f", 00:17:20.760 "superblock": false, 00:17:20.760 "num_base_bdevs": 4, 00:17:20.760 "num_base_bdevs_discovered": 4, 00:17:20.760 "num_base_bdevs_operational": 4, 00:17:20.760 "base_bdevs_list": [ 00:17:20.760 { 00:17:20.760 "name": "spare", 00:17:20.760 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev2", 00:17:20.760 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev3", 00:17:20.760 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev4", 00:17:20.760 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 } 00:17:20.760 ] 00:17:20.760 }' 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.760 08:28:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.760 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.760 "name": 
"raid_bdev1", 00:17:20.760 "uuid": "f4024379-cf48-4ed1-be9b-ee562a188f3e", 00:17:20.760 "strip_size_kb": 64, 00:17:20.760 "state": "online", 00:17:20.760 "raid_level": "raid5f", 00:17:20.760 "superblock": false, 00:17:20.760 "num_base_bdevs": 4, 00:17:20.760 "num_base_bdevs_discovered": 4, 00:17:20.760 "num_base_bdevs_operational": 4, 00:17:20.760 "base_bdevs_list": [ 00:17:20.760 { 00:17:20.760 "name": "spare", 00:17:20.760 "uuid": "d5a42859-d60a-5fa9-aea3-2022398ab976", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev2", 00:17:20.760 "uuid": "e3773fee-d088-5a40-882f-86c9efd59b52", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev3", 00:17:20.760 "uuid": "27046139-088f-5666-95e4-5e41245f4132", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 }, 00:17:20.760 { 00:17:20.760 "name": "BaseBdev4", 00:17:20.760 "uuid": "f0f732f6-5c2f-50c8-9027-65610cf193e4", 00:17:20.760 "is_configured": true, 00:17:20.760 "data_offset": 0, 00:17:20.760 "data_size": 65536 00:17:20.760 } 00:17:20.760 ] 00:17:20.760 }' 00:17:20.760 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.760 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.337 [2024-12-13 08:28:33.453274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.337 [2024-12-13 08:28:33.453315] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.337 [2024-12-13 08:28:33.453398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.337 [2024-12-13 08:28:33.453489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.337 [2024-12-13 08:28:33.453500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.337 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:21.337 /dev/nbd0 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.596 1+0 records in 00:17:21.596 1+0 records out 00:17:21.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466079 s, 8.8 MB/s 00:17:21.596 08:28:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.596 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:21.855 /dev/nbd1 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 
20 )) 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.855 1+0 records in 00:17:21.855 1+0 records out 00:17:21.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281926 s, 14.5 MB/s 00:17:21.855 08:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.855 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.114 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.373 08:28:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84778 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84778 ']' 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84778 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84778 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.373 killing process with pid 84778 00:17:22.373 Received shutdown signal, test time was about 60.000000 seconds 00:17:22.373 00:17:22.373 Latency(us) 00:17:22.373 [2024-12-13T08:28:34.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.373 [2024-12-13T08:28:34.738Z] =================================================================================================================== 00:17:22.373 [2024-12-13T08:28:34.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84778' 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84778 00:17:22.373 [2024-12-13 08:28:34.701870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.373 08:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84778 00:17:22.939 [2024-12-13 08:28:35.187021] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:24.313 00:17:24.313 real 0m19.185s 00:17:24.313 user 0m23.140s 00:17:24.313 sys 0m2.305s 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.313 ************************************ 00:17:24.313 END TEST raid5f_rebuild_test 00:17:24.313 ************************************ 00:17:24.313 08:28:36 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:24.313 08:28:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:24.313 08:28:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.313 08:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.313 ************************************ 00:17:24.313 START TEST raid5f_rebuild_test_sb 00:17:24.313 ************************************ 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.313 08:28:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:24.313 
08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85283 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85283 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85283 ']' 00:17:24.313 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.314 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.314 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.314 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.314 08:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.314 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:24.314 Zero copy mechanism will not be used. 00:17:24.314 [2024-12-13 08:28:36.492160] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:24.314 [2024-12-13 08:28:36.492295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85283 ] 00:17:24.314 [2024-12-13 08:28:36.667166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.572 [2024-12-13 08:28:36.783454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.830 [2024-12-13 08:28:36.978752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.830 [2024-12-13 08:28:36.978814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.089 BaseBdev1_malloc 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.089 
08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.089 [2024-12-13 08:28:37.371361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.089 [2024-12-13 08:28:37.371423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.089 [2024-12-13 08:28:37.371448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:25.089 [2024-12-13 08:28:37.371459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.089 [2024-12-13 08:28:37.373612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.089 [2024-12-13 08:28:37.373654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.089 BaseBdev1 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.089 BaseBdev2_malloc 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.089 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.089 [2024-12-13 08:28:37.426296] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:17:25.089 [2024-12-13 08:28:37.426358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.089 [2024-12-13 08:28:37.426378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:25.089 [2024-12-13 08:28:37.426389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.089 [2024-12-13 08:28:37.428427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.089 [2024-12-13 08:28:37.428469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.090 BaseBdev2 00:17:25.090 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.090 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.090 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:25.090 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.090 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 BaseBdev3_malloc 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 [2024-12-13 08:28:37.496564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:25.350 [2024-12-13 08:28:37.496638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.350 [2024-12-13 
08:28:37.496661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:25.350 [2024-12-13 08:28:37.496672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.350 [2024-12-13 08:28:37.498751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.350 [2024-12-13 08:28:37.498791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:25.350 BaseBdev3 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 BaseBdev4_malloc 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 [2024-12-13 08:28:37.553646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:25.350 [2024-12-13 08:28:37.553709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.350 [2024-12-13 08:28:37.553731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:25.350 [2024-12-13 08:28:37.553743] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.350 [2024-12-13 08:28:37.555966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.350 [2024-12-13 08:28:37.556012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:25.350 BaseBdev4 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 spare_malloc 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 spare_delay 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 [2024-12-13 08:28:37.619898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.350 [2024-12-13 08:28:37.619955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:25.350 [2024-12-13 08:28:37.619975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:25.350 [2024-12-13 08:28:37.619986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.350 [2024-12-13 08:28:37.622091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.350 [2024-12-13 08:28:37.622157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.350 spare 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 [2024-12-13 08:28:37.631935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.350 [2024-12-13 08:28:37.633795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.350 [2024-12-13 08:28:37.633863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.350 [2024-12-13 08:28:37.633916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.350 [2024-12-13 08:28:37.634131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:25.350 [2024-12-13 08:28:37.634159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.350 [2024-12-13 08:28:37.634417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:25.350 [2024-12-13 08:28:37.641835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:17:25.350 [2024-12-13 08:28:37.641861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:25.350 [2024-12-13 08:28:37.642054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 
08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.350 "name": "raid_bdev1", 00:17:25.350 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:25.350 "strip_size_kb": 64, 00:17:25.350 "state": "online", 00:17:25.350 "raid_level": "raid5f", 00:17:25.350 "superblock": true, 00:17:25.350 "num_base_bdevs": 4, 00:17:25.350 "num_base_bdevs_discovered": 4, 00:17:25.350 "num_base_bdevs_operational": 4, 00:17:25.350 "base_bdevs_list": [ 00:17:25.350 { 00:17:25.350 "name": "BaseBdev1", 00:17:25.350 "uuid": "24eea047-7e48-588f-b050-44ce264a3bf8", 00:17:25.350 "is_configured": true, 00:17:25.350 "data_offset": 2048, 00:17:25.350 "data_size": 63488 00:17:25.350 }, 00:17:25.350 { 00:17:25.350 "name": "BaseBdev2", 00:17:25.350 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:25.350 "is_configured": true, 00:17:25.350 "data_offset": 2048, 00:17:25.350 "data_size": 63488 00:17:25.350 }, 00:17:25.350 { 00:17:25.350 "name": "BaseBdev3", 00:17:25.350 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:25.350 "is_configured": true, 00:17:25.350 "data_offset": 2048, 00:17:25.350 "data_size": 63488 00:17:25.350 }, 00:17:25.350 { 00:17:25.350 "name": "BaseBdev4", 00:17:25.351 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:25.351 "is_configured": true, 00:17:25.351 "data_offset": 2048, 00:17:25.351 "data_size": 63488 00:17:25.351 } 00:17:25.351 ] 00:17:25.351 }' 00:17:25.351 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.351 08:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.921 08:28:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:25.921 [2024-12-13 08:28:38.114809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.921 08:28:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.921 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:26.181 [2024-12-13 08:28:38.394178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:26.181 /dev/nbd0 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.181 1+0 records in 00:17:26.181 1+0 records out 00:17:26.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335159 s, 12.2 MB/s 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:26.181 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:26.750 496+0 records in 00:17:26.750 496+0 records out 00:17:26.750 97517568 bytes (98 MB, 93 MiB) copied, 0.48241 s, 202 MB/s 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:26.750 08:28:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.750 08:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.009 [2024-12-13 08:28:39.182972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.009 [2024-12-13 08:28:39.222008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.009 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.009 "name": "raid_bdev1", 00:17:27.009 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:27.009 "strip_size_kb": 64, 00:17:27.009 "state": "online", 00:17:27.009 "raid_level": "raid5f", 00:17:27.009 "superblock": true, 00:17:27.009 "num_base_bdevs": 4, 
00:17:27.009 "num_base_bdevs_discovered": 3, 00:17:27.009 "num_base_bdevs_operational": 3, 00:17:27.009 "base_bdevs_list": [ 00:17:27.009 { 00:17:27.009 "name": null, 00:17:27.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.009 "is_configured": false, 00:17:27.009 "data_offset": 0, 00:17:27.009 "data_size": 63488 00:17:27.009 }, 00:17:27.009 { 00:17:27.009 "name": "BaseBdev2", 00:17:27.009 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:27.009 "is_configured": true, 00:17:27.009 "data_offset": 2048, 00:17:27.009 "data_size": 63488 00:17:27.009 }, 00:17:27.009 { 00:17:27.009 "name": "BaseBdev3", 00:17:27.009 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:27.009 "is_configured": true, 00:17:27.009 "data_offset": 2048, 00:17:27.009 "data_size": 63488 00:17:27.010 }, 00:17:27.010 { 00:17:27.010 "name": "BaseBdev4", 00:17:27.010 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:27.010 "is_configured": true, 00:17:27.010 "data_offset": 2048, 00:17:27.010 "data_size": 63488 00:17:27.010 } 00:17:27.010 ] 00:17:27.010 }' 00:17:27.010 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.010 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.577 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.577 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.577 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.577 [2024-12-13 08:28:39.673243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.577 [2024-12-13 08:28:39.689825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:27.577 08:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.577 08:28:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:27.577 [2024-12-13 08:28:39.699717] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.516 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.516 "name": "raid_bdev1", 00:17:28.516 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:28.516 "strip_size_kb": 64, 00:17:28.516 "state": "online", 00:17:28.516 "raid_level": "raid5f", 00:17:28.516 "superblock": true, 00:17:28.516 "num_base_bdevs": 4, 00:17:28.516 "num_base_bdevs_discovered": 4, 00:17:28.516 "num_base_bdevs_operational": 4, 00:17:28.516 "process": { 00:17:28.516 "type": "rebuild", 00:17:28.516 "target": "spare", 00:17:28.516 "progress": { 00:17:28.516 "blocks": 19200, 00:17:28.516 "percent": 10 00:17:28.516 } 
00:17:28.516 }, 00:17:28.516 "base_bdevs_list": [ 00:17:28.516 { 00:17:28.516 "name": "spare", 00:17:28.516 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:28.516 "is_configured": true, 00:17:28.516 "data_offset": 2048, 00:17:28.516 "data_size": 63488 00:17:28.516 }, 00:17:28.516 { 00:17:28.516 "name": "BaseBdev2", 00:17:28.516 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:28.516 "is_configured": true, 00:17:28.516 "data_offset": 2048, 00:17:28.516 "data_size": 63488 00:17:28.516 }, 00:17:28.516 { 00:17:28.516 "name": "BaseBdev3", 00:17:28.516 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:28.516 "is_configured": true, 00:17:28.516 "data_offset": 2048, 00:17:28.516 "data_size": 63488 00:17:28.516 }, 00:17:28.516 { 00:17:28.517 "name": "BaseBdev4", 00:17:28.517 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:28.517 "is_configured": true, 00:17:28.517 "data_offset": 2048, 00:17:28.517 "data_size": 63488 00:17:28.517 } 00:17:28.517 ] 00:17:28.517 }' 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.517 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.517 [2024-12-13 08:28:40.835460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.776 [2024-12-13 08:28:40.908510] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:17:28.776 [2024-12-13 08:28:40.908607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.776 [2024-12-13 08:28:40.908626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.776 [2024-12-13 08:28:40.908636] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.776 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.777 "name": "raid_bdev1", 00:17:28.777 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:28.777 "strip_size_kb": 64, 00:17:28.777 "state": "online", 00:17:28.777 "raid_level": "raid5f", 00:17:28.777 "superblock": true, 00:17:28.777 "num_base_bdevs": 4, 00:17:28.777 "num_base_bdevs_discovered": 3, 00:17:28.777 "num_base_bdevs_operational": 3, 00:17:28.777 "base_bdevs_list": [ 00:17:28.777 { 00:17:28.777 "name": null, 00:17:28.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.777 "is_configured": false, 00:17:28.777 "data_offset": 0, 00:17:28.777 "data_size": 63488 00:17:28.777 }, 00:17:28.777 { 00:17:28.777 "name": "BaseBdev2", 00:17:28.777 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:28.777 "is_configured": true, 00:17:28.777 "data_offset": 2048, 00:17:28.777 "data_size": 63488 00:17:28.777 }, 00:17:28.777 { 00:17:28.777 "name": "BaseBdev3", 00:17:28.777 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:28.777 "is_configured": true, 00:17:28.777 "data_offset": 2048, 00:17:28.777 "data_size": 63488 00:17:28.777 }, 00:17:28.777 { 00:17:28.777 "name": "BaseBdev4", 00:17:28.777 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:28.777 "is_configured": true, 00:17:28.777 "data_offset": 2048, 00:17:28.777 "data_size": 63488 00:17:28.777 } 00:17:28.777 ] 00:17:28.777 }' 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.777 08:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.345 08:28:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.345 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.345 "name": "raid_bdev1", 00:17:29.345 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:29.345 "strip_size_kb": 64, 00:17:29.345 "state": "online", 00:17:29.345 "raid_level": "raid5f", 00:17:29.345 "superblock": true, 00:17:29.345 "num_base_bdevs": 4, 00:17:29.345 "num_base_bdevs_discovered": 3, 00:17:29.345 "num_base_bdevs_operational": 3, 00:17:29.345 "base_bdevs_list": [ 00:17:29.345 { 00:17:29.345 "name": null, 00:17:29.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.345 "is_configured": false, 00:17:29.346 "data_offset": 0, 00:17:29.346 "data_size": 63488 00:17:29.346 }, 00:17:29.346 { 00:17:29.346 "name": "BaseBdev2", 00:17:29.346 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:29.346 "is_configured": true, 00:17:29.346 "data_offset": 2048, 00:17:29.346 "data_size": 63488 00:17:29.346 }, 00:17:29.346 { 00:17:29.346 "name": "BaseBdev3", 00:17:29.346 "uuid": 
"7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:29.346 "is_configured": true, 00:17:29.346 "data_offset": 2048, 00:17:29.346 "data_size": 63488 00:17:29.346 }, 00:17:29.346 { 00:17:29.346 "name": "BaseBdev4", 00:17:29.346 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:29.346 "is_configured": true, 00:17:29.346 "data_offset": 2048, 00:17:29.346 "data_size": 63488 00:17:29.346 } 00:17:29.346 ] 00:17:29.346 }' 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.346 [2024-12-13 08:28:41.595447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.346 [2024-12-13 08:28:41.612804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.346 08:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:29.346 [2024-12-13 08:28:41.623669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.284 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.544 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.544 "name": "raid_bdev1", 00:17:30.544 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:30.544 "strip_size_kb": 64, 00:17:30.544 "state": "online", 00:17:30.544 "raid_level": "raid5f", 00:17:30.544 "superblock": true, 00:17:30.544 "num_base_bdevs": 4, 00:17:30.544 "num_base_bdevs_discovered": 4, 00:17:30.544 "num_base_bdevs_operational": 4, 00:17:30.544 "process": { 00:17:30.544 "type": "rebuild", 00:17:30.544 "target": "spare", 00:17:30.544 "progress": { 00:17:30.544 "blocks": 19200, 00:17:30.544 "percent": 10 00:17:30.544 } 00:17:30.544 }, 00:17:30.544 "base_bdevs_list": [ 00:17:30.544 { 00:17:30.544 "name": "spare", 00:17:30.544 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:30.544 "is_configured": true, 00:17:30.544 "data_offset": 2048, 00:17:30.544 "data_size": 63488 00:17:30.544 }, 00:17:30.544 { 00:17:30.544 "name": "BaseBdev2", 00:17:30.544 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:30.544 
"is_configured": true, 00:17:30.544 "data_offset": 2048, 00:17:30.544 "data_size": 63488 00:17:30.544 }, 00:17:30.544 { 00:17:30.544 "name": "BaseBdev3", 00:17:30.544 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:30.544 "is_configured": true, 00:17:30.544 "data_offset": 2048, 00:17:30.544 "data_size": 63488 00:17:30.544 }, 00:17:30.544 { 00:17:30.544 "name": "BaseBdev4", 00:17:30.544 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:30.544 "is_configured": true, 00:17:30.545 "data_offset": 2048, 00:17:30.545 "data_size": 63488 00:17:30.545 } 00:17:30.545 ] 00:17:30.545 }' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:30.545 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=643 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.545 "name": "raid_bdev1", 00:17:30.545 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:30.545 "strip_size_kb": 64, 00:17:30.545 "state": "online", 00:17:30.545 "raid_level": "raid5f", 00:17:30.545 "superblock": true, 00:17:30.545 "num_base_bdevs": 4, 00:17:30.545 "num_base_bdevs_discovered": 4, 00:17:30.545 "num_base_bdevs_operational": 4, 00:17:30.545 "process": { 00:17:30.545 "type": "rebuild", 00:17:30.545 "target": "spare", 00:17:30.545 "progress": { 00:17:30.545 "blocks": 21120, 00:17:30.545 "percent": 11 00:17:30.545 } 00:17:30.545 }, 00:17:30.545 "base_bdevs_list": [ 00:17:30.545 { 00:17:30.545 "name": "spare", 00:17:30.545 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:30.545 "is_configured": true, 00:17:30.545 "data_offset": 2048, 00:17:30.545 "data_size": 63488 00:17:30.545 }, 00:17:30.545 { 00:17:30.545 "name": "BaseBdev2", 00:17:30.545 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:30.545 
"is_configured": true, 00:17:30.545 "data_offset": 2048, 00:17:30.545 "data_size": 63488 00:17:30.545 }, 00:17:30.545 { 00:17:30.545 "name": "BaseBdev3", 00:17:30.545 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:30.545 "is_configured": true, 00:17:30.545 "data_offset": 2048, 00:17:30.545 "data_size": 63488 00:17:30.545 }, 00:17:30.545 { 00:17:30.545 "name": "BaseBdev4", 00:17:30.545 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:30.545 "is_configured": true, 00:17:30.545 "data_offset": 2048, 00:17:30.545 "data_size": 63488 00:17:30.545 } 00:17:30.545 ] 00:17:30.545 }' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.545 08:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.948 08:28:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.948 "name": "raid_bdev1", 00:17:31.948 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:31.948 "strip_size_kb": 64, 00:17:31.948 "state": "online", 00:17:31.948 "raid_level": "raid5f", 00:17:31.948 "superblock": true, 00:17:31.948 "num_base_bdevs": 4, 00:17:31.948 "num_base_bdevs_discovered": 4, 00:17:31.948 "num_base_bdevs_operational": 4, 00:17:31.948 "process": { 00:17:31.948 "type": "rebuild", 00:17:31.948 "target": "spare", 00:17:31.948 "progress": { 00:17:31.948 "blocks": 42240, 00:17:31.948 "percent": 22 00:17:31.948 } 00:17:31.948 }, 00:17:31.948 "base_bdevs_list": [ 00:17:31.948 { 00:17:31.948 "name": "spare", 00:17:31.948 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:31.948 "is_configured": true, 00:17:31.948 "data_offset": 2048, 00:17:31.948 "data_size": 63488 00:17:31.948 }, 00:17:31.948 { 00:17:31.948 "name": "BaseBdev2", 00:17:31.948 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:31.948 "is_configured": true, 00:17:31.948 "data_offset": 2048, 00:17:31.948 "data_size": 63488 00:17:31.948 }, 00:17:31.948 { 00:17:31.948 "name": "BaseBdev3", 00:17:31.948 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:31.948 "is_configured": true, 00:17:31.948 "data_offset": 2048, 00:17:31.948 "data_size": 63488 00:17:31.948 }, 00:17:31.948 { 00:17:31.948 "name": "BaseBdev4", 00:17:31.948 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:31.948 "is_configured": true, 00:17:31.948 "data_offset": 2048, 00:17:31.948 
"data_size": 63488 00:17:31.948 } 00:17:31.948 ] 00:17:31.948 }' 00:17:31.948 08:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.948 08:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.948 08:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.948 08:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.948 08:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.890 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.890 "name": 
"raid_bdev1", 00:17:32.890 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:32.890 "strip_size_kb": 64, 00:17:32.890 "state": "online", 00:17:32.890 "raid_level": "raid5f", 00:17:32.890 "superblock": true, 00:17:32.890 "num_base_bdevs": 4, 00:17:32.890 "num_base_bdevs_discovered": 4, 00:17:32.891 "num_base_bdevs_operational": 4, 00:17:32.891 "process": { 00:17:32.891 "type": "rebuild", 00:17:32.891 "target": "spare", 00:17:32.891 "progress": { 00:17:32.891 "blocks": 65280, 00:17:32.891 "percent": 34 00:17:32.891 } 00:17:32.891 }, 00:17:32.891 "base_bdevs_list": [ 00:17:32.891 { 00:17:32.891 "name": "spare", 00:17:32.891 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:32.891 "is_configured": true, 00:17:32.891 "data_offset": 2048, 00:17:32.891 "data_size": 63488 00:17:32.891 }, 00:17:32.891 { 00:17:32.891 "name": "BaseBdev2", 00:17:32.891 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:32.891 "is_configured": true, 00:17:32.891 "data_offset": 2048, 00:17:32.891 "data_size": 63488 00:17:32.891 }, 00:17:32.891 { 00:17:32.891 "name": "BaseBdev3", 00:17:32.891 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:32.891 "is_configured": true, 00:17:32.891 "data_offset": 2048, 00:17:32.891 "data_size": 63488 00:17:32.891 }, 00:17:32.891 { 00:17:32.891 "name": "BaseBdev4", 00:17:32.891 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:32.891 "is_configured": true, 00:17:32.891 "data_offset": 2048, 00:17:32.891 "data_size": 63488 00:17:32.891 } 00:17:32.891 ] 00:17:32.891 }' 00:17:32.891 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.891 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.891 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.891 08:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.891 08:28:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.830 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.089 "name": "raid_bdev1", 00:17:34.089 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:34.089 "strip_size_kb": 64, 00:17:34.089 "state": "online", 00:17:34.089 "raid_level": "raid5f", 00:17:34.089 "superblock": true, 00:17:34.089 "num_base_bdevs": 4, 00:17:34.089 "num_base_bdevs_discovered": 4, 00:17:34.089 "num_base_bdevs_operational": 4, 00:17:34.089 "process": { 00:17:34.089 "type": "rebuild", 00:17:34.089 "target": "spare", 00:17:34.089 "progress": { 00:17:34.089 "blocks": 86400, 00:17:34.089 "percent": 45 00:17:34.089 } 00:17:34.089 }, 00:17:34.089 
"base_bdevs_list": [ 00:17:34.089 { 00:17:34.089 "name": "spare", 00:17:34.089 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:34.089 "is_configured": true, 00:17:34.089 "data_offset": 2048, 00:17:34.089 "data_size": 63488 00:17:34.089 }, 00:17:34.089 { 00:17:34.089 "name": "BaseBdev2", 00:17:34.089 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:34.089 "is_configured": true, 00:17:34.089 "data_offset": 2048, 00:17:34.089 "data_size": 63488 00:17:34.089 }, 00:17:34.089 { 00:17:34.089 "name": "BaseBdev3", 00:17:34.089 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:34.089 "is_configured": true, 00:17:34.089 "data_offset": 2048, 00:17:34.089 "data_size": 63488 00:17:34.089 }, 00:17:34.089 { 00:17:34.089 "name": "BaseBdev4", 00:17:34.089 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:34.089 "is_configured": true, 00:17:34.089 "data_offset": 2048, 00:17:34.089 "data_size": 63488 00:17:34.089 } 00:17:34.089 ] 00:17:34.089 }' 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.089 08:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.027 "name": "raid_bdev1", 00:17:35.027 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:35.027 "strip_size_kb": 64, 00:17:35.027 "state": "online", 00:17:35.027 "raid_level": "raid5f", 00:17:35.027 "superblock": true, 00:17:35.027 "num_base_bdevs": 4, 00:17:35.027 "num_base_bdevs_discovered": 4, 00:17:35.027 "num_base_bdevs_operational": 4, 00:17:35.027 "process": { 00:17:35.027 "type": "rebuild", 00:17:35.027 "target": "spare", 00:17:35.027 "progress": { 00:17:35.027 "blocks": 107520, 00:17:35.027 "percent": 56 00:17:35.027 } 00:17:35.027 }, 00:17:35.027 "base_bdevs_list": [ 00:17:35.027 { 00:17:35.027 "name": "spare", 00:17:35.027 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:35.027 "is_configured": true, 00:17:35.027 "data_offset": 2048, 00:17:35.027 "data_size": 63488 00:17:35.027 }, 00:17:35.027 { 00:17:35.027 "name": "BaseBdev2", 00:17:35.027 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:35.027 "is_configured": true, 00:17:35.027 "data_offset": 2048, 00:17:35.027 "data_size": 63488 00:17:35.027 }, 00:17:35.027 { 00:17:35.027 "name": "BaseBdev3", 00:17:35.027 "uuid": 
"7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:35.027 "is_configured": true, 00:17:35.027 "data_offset": 2048, 00:17:35.027 "data_size": 63488 00:17:35.027 }, 00:17:35.027 { 00:17:35.027 "name": "BaseBdev4", 00:17:35.027 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:35.027 "is_configured": true, 00:17:35.027 "data_offset": 2048, 00:17:35.027 "data_size": 63488 00:17:35.027 } 00:17:35.027 ] 00:17:35.027 }' 00:17:35.027 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.287 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.287 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.287 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.287 08:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.225 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.226 "name": "raid_bdev1", 00:17:36.226 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:36.226 "strip_size_kb": 64, 00:17:36.226 "state": "online", 00:17:36.226 "raid_level": "raid5f", 00:17:36.226 "superblock": true, 00:17:36.226 "num_base_bdevs": 4, 00:17:36.226 "num_base_bdevs_discovered": 4, 00:17:36.226 "num_base_bdevs_operational": 4, 00:17:36.226 "process": { 00:17:36.226 "type": "rebuild", 00:17:36.226 "target": "spare", 00:17:36.226 "progress": { 00:17:36.226 "blocks": 130560, 00:17:36.226 "percent": 68 00:17:36.226 } 00:17:36.226 }, 00:17:36.226 "base_bdevs_list": [ 00:17:36.226 { 00:17:36.226 "name": "spare", 00:17:36.226 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:36.226 "is_configured": true, 00:17:36.226 "data_offset": 2048, 00:17:36.226 "data_size": 63488 00:17:36.226 }, 00:17:36.226 { 00:17:36.226 "name": "BaseBdev2", 00:17:36.226 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:36.226 "is_configured": true, 00:17:36.226 "data_offset": 2048, 00:17:36.226 "data_size": 63488 00:17:36.226 }, 00:17:36.226 { 00:17:36.226 "name": "BaseBdev3", 00:17:36.226 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:36.226 "is_configured": true, 00:17:36.226 "data_offset": 2048, 00:17:36.226 "data_size": 63488 00:17:36.226 }, 00:17:36.226 { 00:17:36.226 "name": "BaseBdev4", 00:17:36.226 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:36.226 "is_configured": true, 00:17:36.226 "data_offset": 2048, 00:17:36.226 "data_size": 63488 00:17:36.226 } 00:17:36.226 ] 00:17:36.226 }' 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.226 08:28:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.226 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.485 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.485 08:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.423 "name": "raid_bdev1", 00:17:37.423 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:37.423 "strip_size_kb": 64, 00:17:37.423 "state": "online", 00:17:37.423 "raid_level": "raid5f", 00:17:37.423 "superblock": true, 
00:17:37.423 "num_base_bdevs": 4, 00:17:37.423 "num_base_bdevs_discovered": 4, 00:17:37.423 "num_base_bdevs_operational": 4, 00:17:37.423 "process": { 00:17:37.423 "type": "rebuild", 00:17:37.423 "target": "spare", 00:17:37.423 "progress": { 00:17:37.423 "blocks": 151680, 00:17:37.423 "percent": 79 00:17:37.423 } 00:17:37.423 }, 00:17:37.423 "base_bdevs_list": [ 00:17:37.423 { 00:17:37.423 "name": "spare", 00:17:37.423 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:37.423 "is_configured": true, 00:17:37.423 "data_offset": 2048, 00:17:37.423 "data_size": 63488 00:17:37.423 }, 00:17:37.423 { 00:17:37.423 "name": "BaseBdev2", 00:17:37.423 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:37.423 "is_configured": true, 00:17:37.423 "data_offset": 2048, 00:17:37.423 "data_size": 63488 00:17:37.423 }, 00:17:37.423 { 00:17:37.423 "name": "BaseBdev3", 00:17:37.423 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:37.423 "is_configured": true, 00:17:37.423 "data_offset": 2048, 00:17:37.423 "data_size": 63488 00:17:37.423 }, 00:17:37.423 { 00:17:37.423 "name": "BaseBdev4", 00:17:37.423 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:37.423 "is_configured": true, 00:17:37.423 "data_offset": 2048, 00:17:37.423 "data_size": 63488 00:17:37.423 } 00:17:37.423 ] 00:17:37.423 }' 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.423 08:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.804 08:28:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.804 "name": "raid_bdev1", 00:17:38.804 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:38.804 "strip_size_kb": 64, 00:17:38.804 "state": "online", 00:17:38.804 "raid_level": "raid5f", 00:17:38.804 "superblock": true, 00:17:38.804 "num_base_bdevs": 4, 00:17:38.804 "num_base_bdevs_discovered": 4, 00:17:38.804 "num_base_bdevs_operational": 4, 00:17:38.804 "process": { 00:17:38.804 "type": "rebuild", 00:17:38.804 "target": "spare", 00:17:38.804 "progress": { 00:17:38.804 "blocks": 172800, 00:17:38.804 "percent": 90 00:17:38.804 } 00:17:38.804 }, 00:17:38.804 "base_bdevs_list": [ 00:17:38.804 { 00:17:38.804 "name": "spare", 00:17:38.804 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:38.804 "is_configured": true, 00:17:38.804 "data_offset": 2048, 00:17:38.804 
"data_size": 63488 00:17:38.804 }, 00:17:38.804 { 00:17:38.804 "name": "BaseBdev2", 00:17:38.804 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:38.804 "is_configured": true, 00:17:38.804 "data_offset": 2048, 00:17:38.804 "data_size": 63488 00:17:38.804 }, 00:17:38.804 { 00:17:38.804 "name": "BaseBdev3", 00:17:38.804 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:38.804 "is_configured": true, 00:17:38.804 "data_offset": 2048, 00:17:38.804 "data_size": 63488 00:17:38.804 }, 00:17:38.804 { 00:17:38.804 "name": "BaseBdev4", 00:17:38.804 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:38.804 "is_configured": true, 00:17:38.804 "data_offset": 2048, 00:17:38.804 "data_size": 63488 00:17:38.804 } 00:17:38.804 ] 00:17:38.804 }' 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.804 08:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.373 [2024-12-13 08:28:51.693091] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:39.373 [2024-12-13 08:28:51.693209] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:39.373 [2024-12-13 08:28:51.693355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.632 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.633 "name": "raid_bdev1", 00:17:39.633 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:39.633 "strip_size_kb": 64, 00:17:39.633 "state": "online", 00:17:39.633 "raid_level": "raid5f", 00:17:39.633 "superblock": true, 00:17:39.633 "num_base_bdevs": 4, 00:17:39.633 "num_base_bdevs_discovered": 4, 00:17:39.633 "num_base_bdevs_operational": 4, 00:17:39.633 "base_bdevs_list": [ 00:17:39.633 { 00:17:39.633 "name": "spare", 00:17:39.633 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:39.633 "is_configured": true, 00:17:39.633 "data_offset": 2048, 00:17:39.633 "data_size": 63488 00:17:39.633 }, 00:17:39.633 { 00:17:39.633 "name": "BaseBdev2", 00:17:39.633 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:39.633 "is_configured": true, 00:17:39.633 "data_offset": 2048, 00:17:39.633 "data_size": 63488 00:17:39.633 }, 00:17:39.633 { 00:17:39.633 "name": "BaseBdev3", 00:17:39.633 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 
00:17:39.633 "is_configured": true, 00:17:39.633 "data_offset": 2048, 00:17:39.633 "data_size": 63488 00:17:39.633 }, 00:17:39.633 { 00:17:39.633 "name": "BaseBdev4", 00:17:39.633 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:39.633 "is_configured": true, 00:17:39.633 "data_offset": 2048, 00:17:39.633 "data_size": 63488 00:17:39.633 } 00:17:39.633 ] 00:17:39.633 }' 00:17:39.633 08:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.892 08:28:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.892 "name": "raid_bdev1", 00:17:39.892 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:39.892 "strip_size_kb": 64, 00:17:39.892 "state": "online", 00:17:39.892 "raid_level": "raid5f", 00:17:39.892 "superblock": true, 00:17:39.892 "num_base_bdevs": 4, 00:17:39.892 "num_base_bdevs_discovered": 4, 00:17:39.892 "num_base_bdevs_operational": 4, 00:17:39.892 "base_bdevs_list": [ 00:17:39.892 { 00:17:39.892 "name": "spare", 00:17:39.892 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:39.892 "is_configured": true, 00:17:39.892 "data_offset": 2048, 00:17:39.892 "data_size": 63488 00:17:39.892 }, 00:17:39.892 { 00:17:39.892 "name": "BaseBdev2", 00:17:39.892 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:39.892 "is_configured": true, 00:17:39.892 "data_offset": 2048, 00:17:39.892 "data_size": 63488 00:17:39.892 }, 00:17:39.892 { 00:17:39.892 "name": "BaseBdev3", 00:17:39.892 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:39.892 "is_configured": true, 00:17:39.892 "data_offset": 2048, 00:17:39.892 "data_size": 63488 00:17:39.892 }, 00:17:39.892 { 00:17:39.892 "name": "BaseBdev4", 00:17:39.892 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:39.892 "is_configured": true, 00:17:39.892 "data_offset": 2048, 00:17:39.892 "data_size": 63488 00:17:39.892 } 00:17:39.892 ] 00:17:39.892 }' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.892 08:28:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.892 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.152 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.152 "name": "raid_bdev1", 00:17:40.152 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:40.152 "strip_size_kb": 64, 00:17:40.152 "state": "online", 00:17:40.152 "raid_level": "raid5f", 00:17:40.152 "superblock": true, 
00:17:40.152 "num_base_bdevs": 4, 00:17:40.152 "num_base_bdevs_discovered": 4, 00:17:40.152 "num_base_bdevs_operational": 4, 00:17:40.152 "base_bdevs_list": [ 00:17:40.152 { 00:17:40.152 "name": "spare", 00:17:40.152 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:40.152 "is_configured": true, 00:17:40.152 "data_offset": 2048, 00:17:40.152 "data_size": 63488 00:17:40.152 }, 00:17:40.152 { 00:17:40.152 "name": "BaseBdev2", 00:17:40.152 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:40.152 "is_configured": true, 00:17:40.152 "data_offset": 2048, 00:17:40.152 "data_size": 63488 00:17:40.152 }, 00:17:40.152 { 00:17:40.152 "name": "BaseBdev3", 00:17:40.152 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:40.152 "is_configured": true, 00:17:40.152 "data_offset": 2048, 00:17:40.152 "data_size": 63488 00:17:40.152 }, 00:17:40.152 { 00:17:40.152 "name": "BaseBdev4", 00:17:40.152 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:40.152 "is_configured": true, 00:17:40.152 "data_offset": 2048, 00:17:40.152 "data_size": 63488 00:17:40.152 } 00:17:40.152 ] 00:17:40.152 }' 00:17:40.152 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.152 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.411 [2024-12-13 08:28:52.666650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.411 [2024-12-13 08:28:52.666689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.411 [2024-12-13 08:28:52.666778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.411 
[2024-12-13 08:28:52.666886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.411 [2024-12-13 08:28:52.666916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:40.411 08:28:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.411 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:40.671 /dev/nbd0 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.671 1+0 records in 00:17:40.671 1+0 records out 00:17:40.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189593 s, 21.6 MB/s 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.671 08:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:40.930 /dev/nbd1 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.930 08:28:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.930 1+0 records in 00:17:40.930 1+0 records out 00:17:40.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278466 s, 14.7 MB/s 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.930 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.189 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:41.450 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.451 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.713 [2024-12-13 08:28:53.898451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.713 [2024-12-13 08:28:53.898512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.713 [2024-12-13 08:28:53.898537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:41.713 [2024-12-13 08:28:53.898546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.713 [2024-12-13 08:28:53.901199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.713 [2024-12-13 08:28:53.901242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.713 [2024-12-13 08:28:53.901314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:41.713 [2024-12-13 08:28:53.901370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.713 [2024-12-13 08:28:53.901574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:41.713 [2024-12-13 08:28:53.901719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.713 [2024-12-13 08:28:53.901828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:41.713 spare 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.713 08:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.713 [2024-12-13 08:28:54.001762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:41.713 [2024-12-13 08:28:54.001810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:41.713 [2024-12-13 08:28:54.002169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:41.713 [2024-12-13 08:28:54.010334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:41.713 [2024-12-13 08:28:54.010374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:41.713 [2024-12-13 08:28:54.010606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.713 08:28:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.713 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.713 "name": "raid_bdev1", 00:17:41.713 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:41.713 "strip_size_kb": 64, 00:17:41.713 "state": "online", 00:17:41.713 "raid_level": "raid5f", 00:17:41.713 "superblock": true, 00:17:41.713 "num_base_bdevs": 4, 00:17:41.713 "num_base_bdevs_discovered": 4, 00:17:41.713 "num_base_bdevs_operational": 4, 00:17:41.713 "base_bdevs_list": [ 00:17:41.713 { 00:17:41.713 "name": "spare", 00:17:41.713 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:41.713 "is_configured": true, 00:17:41.713 "data_offset": 2048, 00:17:41.713 "data_size": 63488 
00:17:41.713 }, 00:17:41.713 { 00:17:41.713 "name": "BaseBdev2", 00:17:41.713 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:41.713 "is_configured": true, 00:17:41.713 "data_offset": 2048, 00:17:41.713 "data_size": 63488 00:17:41.713 }, 00:17:41.714 { 00:17:41.714 "name": "BaseBdev3", 00:17:41.714 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:41.714 "is_configured": true, 00:17:41.714 "data_offset": 2048, 00:17:41.714 "data_size": 63488 00:17:41.714 }, 00:17:41.714 { 00:17:41.714 "name": "BaseBdev4", 00:17:41.714 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:41.714 "is_configured": true, 00:17:41.714 "data_offset": 2048, 00:17:41.714 "data_size": 63488 00:17:41.714 } 00:17:41.714 ] 00:17:41.714 }' 00:17:41.714 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.714 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.283 08:28:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.283 "name": "raid_bdev1", 00:17:42.283 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:42.283 "strip_size_kb": 64, 00:17:42.283 "state": "online", 00:17:42.283 "raid_level": "raid5f", 00:17:42.283 "superblock": true, 00:17:42.283 "num_base_bdevs": 4, 00:17:42.283 "num_base_bdevs_discovered": 4, 00:17:42.283 "num_base_bdevs_operational": 4, 00:17:42.283 "base_bdevs_list": [ 00:17:42.283 { 00:17:42.283 "name": "spare", 00:17:42.283 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:42.283 "is_configured": true, 00:17:42.283 "data_offset": 2048, 00:17:42.283 "data_size": 63488 00:17:42.283 }, 00:17:42.283 { 00:17:42.283 "name": "BaseBdev2", 00:17:42.283 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:42.283 "is_configured": true, 00:17:42.283 "data_offset": 2048, 00:17:42.283 "data_size": 63488 00:17:42.283 }, 00:17:42.283 { 00:17:42.283 "name": "BaseBdev3", 00:17:42.283 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:42.283 "is_configured": true, 00:17:42.283 "data_offset": 2048, 00:17:42.283 "data_size": 63488 00:17:42.283 }, 00:17:42.283 { 00:17:42.283 "name": "BaseBdev4", 00:17:42.283 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:42.283 "is_configured": true, 00:17:42.283 "data_offset": 2048, 00:17:42.283 "data_size": 63488 00:17:42.283 } 00:17:42.283 ] 00:17:42.283 }' 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.283 08:28:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.283 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.284 [2024-12-13 08:28:54.635123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.284 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.543 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.543 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.543 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.543 "name": "raid_bdev1", 00:17:42.543 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:42.543 "strip_size_kb": 64, 00:17:42.543 "state": "online", 00:17:42.543 "raid_level": "raid5f", 00:17:42.543 "superblock": true, 00:17:42.543 "num_base_bdevs": 4, 00:17:42.543 "num_base_bdevs_discovered": 3, 00:17:42.544 "num_base_bdevs_operational": 3, 00:17:42.544 "base_bdevs_list": [ 00:17:42.544 { 00:17:42.544 "name": null, 00:17:42.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.544 "is_configured": false, 00:17:42.544 "data_offset": 0, 00:17:42.544 "data_size": 63488 00:17:42.544 }, 00:17:42.544 { 00:17:42.544 "name": "BaseBdev2", 00:17:42.544 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:42.544 "is_configured": true, 00:17:42.544 "data_offset": 2048, 00:17:42.544 "data_size": 63488 00:17:42.544 }, 00:17:42.544 { 00:17:42.544 "name": "BaseBdev3", 00:17:42.544 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:42.544 "is_configured": true, 00:17:42.544 "data_offset": 2048, 
00:17:42.544 "data_size": 63488 00:17:42.544 }, 00:17:42.544 { 00:17:42.544 "name": "BaseBdev4", 00:17:42.544 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:42.544 "is_configured": true, 00:17:42.544 "data_offset": 2048, 00:17:42.544 "data_size": 63488 00:17:42.544 } 00:17:42.544 ] 00:17:42.544 }' 00:17:42.544 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.544 08:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.803 08:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.803 08:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.803 08:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.803 [2024-12-13 08:28:55.058426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.803 [2024-12-13 08:28:55.058654] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.804 [2024-12-13 08:28:55.058681] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:42.804 [2024-12-13 08:28:55.058724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.804 [2024-12-13 08:28:55.075546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:42.804 08:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.804 08:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:42.804 [2024-12-13 08:28:55.086392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.741 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.741 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.741 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.741 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.742 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.742 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.742 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.742 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.742 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.001 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.001 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.001 "name": "raid_bdev1", 00:17:44.001 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:44.001 "strip_size_kb": 64, 00:17:44.001 "state": "online", 00:17:44.001 
"raid_level": "raid5f", 00:17:44.001 "superblock": true, 00:17:44.001 "num_base_bdevs": 4, 00:17:44.001 "num_base_bdevs_discovered": 4, 00:17:44.001 "num_base_bdevs_operational": 4, 00:17:44.001 "process": { 00:17:44.001 "type": "rebuild", 00:17:44.001 "target": "spare", 00:17:44.001 "progress": { 00:17:44.001 "blocks": 19200, 00:17:44.001 "percent": 10 00:17:44.001 } 00:17:44.001 }, 00:17:44.001 "base_bdevs_list": [ 00:17:44.001 { 00:17:44.001 "name": "spare", 00:17:44.001 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:44.001 "is_configured": true, 00:17:44.001 "data_offset": 2048, 00:17:44.001 "data_size": 63488 00:17:44.001 }, 00:17:44.001 { 00:17:44.001 "name": "BaseBdev2", 00:17:44.001 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:44.001 "is_configured": true, 00:17:44.001 "data_offset": 2048, 00:17:44.001 "data_size": 63488 00:17:44.001 }, 00:17:44.001 { 00:17:44.001 "name": "BaseBdev3", 00:17:44.001 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:44.001 "is_configured": true, 00:17:44.001 "data_offset": 2048, 00:17:44.001 "data_size": 63488 00:17:44.002 }, 00:17:44.002 { 00:17:44.002 "name": "BaseBdev4", 00:17:44.002 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:44.002 "is_configured": true, 00:17:44.002 "data_offset": 2048, 00:17:44.002 "data_size": 63488 00:17:44.002 } 00:17:44.002 ] 00:17:44.002 }' 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 [2024-12-13 08:28:56.237631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.002 [2024-12-13 08:28:56.294867] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.002 [2024-12-13 08:28:56.294945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.002 [2024-12-13 08:28:56.294964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.002 [2024-12-13 08:28:56.294974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.002 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.261 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.261 "name": "raid_bdev1", 00:17:44.261 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:44.261 "strip_size_kb": 64, 00:17:44.261 "state": "online", 00:17:44.261 "raid_level": "raid5f", 00:17:44.261 "superblock": true, 00:17:44.261 "num_base_bdevs": 4, 00:17:44.261 "num_base_bdevs_discovered": 3, 00:17:44.261 "num_base_bdevs_operational": 3, 00:17:44.261 "base_bdevs_list": [ 00:17:44.261 { 00:17:44.261 "name": null, 00:17:44.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.261 "is_configured": false, 00:17:44.261 "data_offset": 0, 00:17:44.261 "data_size": 63488 00:17:44.261 }, 00:17:44.261 { 00:17:44.261 "name": "BaseBdev2", 00:17:44.261 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:44.261 "is_configured": true, 00:17:44.261 "data_offset": 2048, 00:17:44.261 "data_size": 63488 00:17:44.261 }, 00:17:44.261 { 00:17:44.261 "name": "BaseBdev3", 00:17:44.261 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:44.261 "is_configured": true, 00:17:44.261 "data_offset": 2048, 00:17:44.261 "data_size": 63488 00:17:44.261 }, 00:17:44.261 { 00:17:44.261 "name": "BaseBdev4", 00:17:44.261 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:44.261 "is_configured": true, 00:17:44.261 "data_offset": 2048, 00:17:44.261 "data_size": 63488 00:17:44.261 } 00:17:44.261 ] 00:17:44.261 
}' 00:17:44.261 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.261 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.521 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.521 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.521 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.521 [2024-12-13 08:28:56.737328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.521 [2024-12-13 08:28:56.737408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.521 [2024-12-13 08:28:56.737438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:44.521 [2024-12-13 08:28:56.737451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.521 [2024-12-13 08:28:56.737986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.521 [2024-12-13 08:28:56.738021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.521 [2024-12-13 08:28:56.738142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.521 [2024-12-13 08:28:56.738161] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.521 [2024-12-13 08:28:56.738172] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:44.521 [2024-12-13 08:28:56.738196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.521 [2024-12-13 08:28:56.753087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:44.521 spare 00:17:44.521 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.521 08:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:44.521 [2024-12-13 08:28:56.761985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.462 "name": "raid_bdev1", 00:17:45.462 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:45.462 "strip_size_kb": 64, 00:17:45.462 "state": 
"online", 00:17:45.462 "raid_level": "raid5f", 00:17:45.462 "superblock": true, 00:17:45.462 "num_base_bdevs": 4, 00:17:45.462 "num_base_bdevs_discovered": 4, 00:17:45.462 "num_base_bdevs_operational": 4, 00:17:45.462 "process": { 00:17:45.462 "type": "rebuild", 00:17:45.462 "target": "spare", 00:17:45.462 "progress": { 00:17:45.462 "blocks": 19200, 00:17:45.462 "percent": 10 00:17:45.462 } 00:17:45.462 }, 00:17:45.462 "base_bdevs_list": [ 00:17:45.462 { 00:17:45.462 "name": "spare", 00:17:45.462 "uuid": "4b2ec9df-786d-5f98-af25-9427fc475f95", 00:17:45.462 "is_configured": true, 00:17:45.462 "data_offset": 2048, 00:17:45.462 "data_size": 63488 00:17:45.462 }, 00:17:45.462 { 00:17:45.462 "name": "BaseBdev2", 00:17:45.462 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:45.462 "is_configured": true, 00:17:45.462 "data_offset": 2048, 00:17:45.462 "data_size": 63488 00:17:45.462 }, 00:17:45.462 { 00:17:45.462 "name": "BaseBdev3", 00:17:45.462 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:45.462 "is_configured": true, 00:17:45.462 "data_offset": 2048, 00:17:45.462 "data_size": 63488 00:17:45.462 }, 00:17:45.462 { 00:17:45.462 "name": "BaseBdev4", 00:17:45.462 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:45.462 "is_configured": true, 00:17:45.462 "data_offset": 2048, 00:17:45.462 "data_size": 63488 00:17:45.462 } 00:17:45.462 ] 00:17:45.462 }' 00:17:45.462 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.723 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.723 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.723 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.723 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.723 08:28:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.723 08:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.723 [2024-12-13 08:28:57.921321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.723 [2024-12-13 08:28:57.970061] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.723 [2024-12-13 08:28:57.970161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.723 [2024-12-13 08:28:57.970184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.723 [2024-12-13 08:28:57.970193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.723 08:28:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.723 "name": "raid_bdev1", 00:17:45.723 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:45.723 "strip_size_kb": 64, 00:17:45.723 "state": "online", 00:17:45.723 "raid_level": "raid5f", 00:17:45.723 "superblock": true, 00:17:45.723 "num_base_bdevs": 4, 00:17:45.723 "num_base_bdevs_discovered": 3, 00:17:45.723 "num_base_bdevs_operational": 3, 00:17:45.723 "base_bdevs_list": [ 00:17:45.723 { 00:17:45.723 "name": null, 00:17:45.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.723 "is_configured": false, 00:17:45.723 "data_offset": 0, 00:17:45.723 "data_size": 63488 00:17:45.723 }, 00:17:45.723 { 00:17:45.723 "name": "BaseBdev2", 00:17:45.723 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:45.723 "is_configured": true, 00:17:45.723 "data_offset": 2048, 00:17:45.723 "data_size": 63488 00:17:45.723 }, 00:17:45.723 { 00:17:45.723 "name": "BaseBdev3", 00:17:45.723 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:45.723 "is_configured": true, 00:17:45.723 "data_offset": 2048, 00:17:45.723 "data_size": 63488 00:17:45.723 }, 00:17:45.723 { 00:17:45.723 "name": "BaseBdev4", 00:17:45.723 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:45.723 "is_configured": true, 00:17:45.723 "data_offset": 2048, 00:17:45.723 
"data_size": 63488 00:17:45.723 } 00:17:45.723 ] 00:17:45.723 }' 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.723 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.294 "name": "raid_bdev1", 00:17:46.294 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:46.294 "strip_size_kb": 64, 00:17:46.294 "state": "online", 00:17:46.294 "raid_level": "raid5f", 00:17:46.294 "superblock": true, 00:17:46.294 "num_base_bdevs": 4, 00:17:46.294 "num_base_bdevs_discovered": 3, 00:17:46.294 "num_base_bdevs_operational": 3, 00:17:46.294 "base_bdevs_list": [ 00:17:46.294 { 00:17:46.294 "name": null, 00:17:46.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.294 
"is_configured": false, 00:17:46.294 "data_offset": 0, 00:17:46.294 "data_size": 63488 00:17:46.294 }, 00:17:46.294 { 00:17:46.294 "name": "BaseBdev2", 00:17:46.294 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:46.294 "is_configured": true, 00:17:46.294 "data_offset": 2048, 00:17:46.294 "data_size": 63488 00:17:46.294 }, 00:17:46.294 { 00:17:46.294 "name": "BaseBdev3", 00:17:46.294 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:46.294 "is_configured": true, 00:17:46.294 "data_offset": 2048, 00:17:46.294 "data_size": 63488 00:17:46.294 }, 00:17:46.294 { 00:17:46.294 "name": "BaseBdev4", 00:17:46.294 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:46.294 "is_configured": true, 00:17:46.294 "data_offset": 2048, 00:17:46.294 "data_size": 63488 00:17:46.294 } 00:17:46.294 ] 00:17:46.294 }' 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:46.294 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.294 08:28:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 [2024-12-13 08:28:58.632814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:46.294 [2024-12-13 08:28:58.632886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.294 [2024-12-13 08:28:58.632913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:46.294 [2024-12-13 08:28:58.632923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.294 [2024-12-13 08:28:58.633490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.294 [2024-12-13 08:28:58.633523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:46.294 [2024-12-13 08:28:58.633618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:46.295 [2024-12-13 08:28:58.633634] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.295 [2024-12-13 08:28:58.633647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:46.295 [2024-12-13 08:28:58.633662] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:46.295 BaseBdev1 00:17:46.295 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.295 08:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.678 "name": "raid_bdev1", 00:17:47.678 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:47.678 "strip_size_kb": 64, 00:17:47.678 "state": "online", 00:17:47.678 "raid_level": "raid5f", 00:17:47.678 "superblock": true, 00:17:47.678 "num_base_bdevs": 4, 00:17:47.678 "num_base_bdevs_discovered": 3, 00:17:47.678 "num_base_bdevs_operational": 3, 00:17:47.678 "base_bdevs_list": [ 00:17:47.678 { 00:17:47.678 "name": null, 00:17:47.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.678 "is_configured": false, 00:17:47.678 
"data_offset": 0, 00:17:47.678 "data_size": 63488 00:17:47.678 }, 00:17:47.678 { 00:17:47.678 "name": "BaseBdev2", 00:17:47.678 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 2048, 00:17:47.678 "data_size": 63488 00:17:47.678 }, 00:17:47.678 { 00:17:47.678 "name": "BaseBdev3", 00:17:47.678 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 2048, 00:17:47.678 "data_size": 63488 00:17:47.678 }, 00:17:47.678 { 00:17:47.678 "name": "BaseBdev4", 00:17:47.678 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:47.678 "is_configured": true, 00:17:47.678 "data_offset": 2048, 00:17:47.678 "data_size": 63488 00:17:47.678 } 00:17:47.678 ] 00:17:47.678 }' 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.678 08:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.938 "name": "raid_bdev1", 00:17:47.938 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:47.938 "strip_size_kb": 64, 00:17:47.938 "state": "online", 00:17:47.938 "raid_level": "raid5f", 00:17:47.938 "superblock": true, 00:17:47.938 "num_base_bdevs": 4, 00:17:47.938 "num_base_bdevs_discovered": 3, 00:17:47.938 "num_base_bdevs_operational": 3, 00:17:47.938 "base_bdevs_list": [ 00:17:47.938 { 00:17:47.938 "name": null, 00:17:47.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.938 "is_configured": false, 00:17:47.938 "data_offset": 0, 00:17:47.938 "data_size": 63488 00:17:47.938 }, 00:17:47.938 { 00:17:47.938 "name": "BaseBdev2", 00:17:47.938 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:47.938 "is_configured": true, 00:17:47.938 "data_offset": 2048, 00:17:47.938 "data_size": 63488 00:17:47.938 }, 00:17:47.938 { 00:17:47.938 "name": "BaseBdev3", 00:17:47.938 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:47.938 "is_configured": true, 00:17:47.938 "data_offset": 2048, 00:17:47.938 "data_size": 63488 00:17:47.938 }, 00:17:47.938 { 00:17:47.938 "name": "BaseBdev4", 00:17:47.938 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:47.938 "is_configured": true, 00:17:47.938 "data_offset": 2048, 00:17:47.938 "data_size": 63488 00:17:47.938 } 00:17:47.938 ] 00:17:47.938 }' 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.938 
08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:47.938 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.939 [2024-12-13 08:29:00.254299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.939 [2024-12-13 08:29:00.254513] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.939 [2024-12-13 08:29:00.254542] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:47.939 request: 00:17:47.939 { 00:17:47.939 "base_bdev": "BaseBdev1", 00:17:47.939 "raid_bdev": "raid_bdev1", 00:17:47.939 "method": "bdev_raid_add_base_bdev", 00:17:47.939 "req_id": 1 00:17:47.939 } 00:17:47.939 Got JSON-RPC error response 00:17:47.939 response: 00:17:47.939 { 00:17:47.939 "code": -22, 00:17:47.939 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:47.939 } 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.939 08:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:49.318 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.318 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.318 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.319 "name": "raid_bdev1", 00:17:49.319 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:49.319 "strip_size_kb": 64, 00:17:49.319 "state": "online", 00:17:49.319 "raid_level": "raid5f", 00:17:49.319 "superblock": true, 00:17:49.319 "num_base_bdevs": 4, 00:17:49.319 "num_base_bdevs_discovered": 3, 00:17:49.319 "num_base_bdevs_operational": 3, 00:17:49.319 "base_bdevs_list": [ 00:17:49.319 { 00:17:49.319 "name": null, 00:17:49.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.319 "is_configured": false, 00:17:49.319 "data_offset": 0, 00:17:49.319 "data_size": 63488 00:17:49.319 }, 00:17:49.319 { 00:17:49.319 "name": "BaseBdev2", 00:17:49.319 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:49.319 "is_configured": true, 00:17:49.319 "data_offset": 2048, 00:17:49.319 "data_size": 63488 00:17:49.319 }, 00:17:49.319 { 00:17:49.319 "name": "BaseBdev3", 00:17:49.319 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:49.319 "is_configured": true, 00:17:49.319 "data_offset": 2048, 00:17:49.319 "data_size": 63488 00:17:49.319 }, 00:17:49.319 { 00:17:49.319 "name": "BaseBdev4", 00:17:49.319 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:49.319 "is_configured": true, 00:17:49.319 "data_offset": 2048, 00:17:49.319 "data_size": 63488 00:17:49.319 } 00:17:49.319 ] 00:17:49.319 }' 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.319 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.578 "name": "raid_bdev1", 00:17:49.578 "uuid": "f0939416-e278-4140-9afa-e2f8f93b4988", 00:17:49.578 "strip_size_kb": 64, 00:17:49.578 "state": "online", 00:17:49.578 "raid_level": "raid5f", 00:17:49.578 "superblock": true, 00:17:49.578 "num_base_bdevs": 4, 00:17:49.578 "num_base_bdevs_discovered": 3, 00:17:49.578 "num_base_bdevs_operational": 3, 00:17:49.578 "base_bdevs_list": [ 00:17:49.578 { 00:17:49.578 "name": null, 00:17:49.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.578 "is_configured": false, 00:17:49.578 "data_offset": 0, 00:17:49.578 "data_size": 63488 00:17:49.578 }, 00:17:49.578 { 00:17:49.578 "name": "BaseBdev2", 00:17:49.578 "uuid": "9b12ca33-3fd1-5a1a-b357-3bd2ba6bb8c1", 00:17:49.578 "is_configured": true, 
00:17:49.578 "data_offset": 2048, 00:17:49.578 "data_size": 63488 00:17:49.578 }, 00:17:49.578 { 00:17:49.578 "name": "BaseBdev3", 00:17:49.578 "uuid": "7b4ed27c-7f76-53f0-b2c4-9de8f5782cd7", 00:17:49.578 "is_configured": true, 00:17:49.578 "data_offset": 2048, 00:17:49.578 "data_size": 63488 00:17:49.578 }, 00:17:49.578 { 00:17:49.578 "name": "BaseBdev4", 00:17:49.578 "uuid": "94c68ea7-72fa-5aa7-855d-e7e2bcb93b62", 00:17:49.578 "is_configured": true, 00:17:49.578 "data_offset": 2048, 00:17:49.578 "data_size": 63488 00:17:49.578 } 00:17:49.578 ] 00:17:49.578 }' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85283 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85283 ']' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85283 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85283 00:17:49.578 killing process with pid 85283 00:17:49.578 Received shutdown signal, test time was about 60.000000 seconds 00:17:49.578 00:17:49.578 Latency(us) 00:17:49.578 [2024-12-13T08:29:01.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.578 [2024-12-13T08:29:01.943Z] 
=================================================================================================================== 00:17:49.578 [2024-12-13T08:29:01.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85283' 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85283 00:17:49.578 [2024-12-13 08:29:01.881004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.578 [2024-12-13 08:29:01.881169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.578 08:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85283 00:17:49.578 [2024-12-13 08:29:01.881267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.578 [2024-12-13 08:29:01.881282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:50.148 [2024-12-13 08:29:02.363383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.087 08:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:51.087 00:17:51.087 real 0m27.061s 00:17:51.087 user 0m34.079s 00:17:51.087 sys 0m3.005s 00:17:51.087 08:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.087 08:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.087 ************************************ 00:17:51.087 END TEST raid5f_rebuild_test_sb 00:17:51.087 ************************************ 00:17:51.346 08:29:03 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:51.346 08:29:03 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:51.347 08:29:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:51.347 08:29:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.347 08:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 ************************************ 00:17:51.347 START TEST raid_state_function_test_sb_4k 00:17:51.347 ************************************ 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:51.347 08:29:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86093 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:51.347 Process raid pid: 86093 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86093' 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86093 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86093 ']' 00:17:51.347 08:29:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.347 08:29:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 [2024-12-13 08:29:03.611059] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:51.347 [2024-12-13 08:29:03.611189] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.606 [2024-12-13 08:29:03.786181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.606 [2024-12-13 08:29:03.907834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.865 [2024-12-13 08:29:04.113576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.865 [2024-12-13 08:29:04.113618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 [2024-12-13 08:29:04.445944] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.124 [2024-12-13 08:29:04.445995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.124 [2024-12-13 08:29:04.446005] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.124 [2024-12-13 08:29:04.446014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.124 
08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.124 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.387 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.387 "name": "Existed_Raid", 00:17:52.387 "uuid": "6dcba839-cba9-4d54-aede-1584e8c3960e", 00:17:52.387 "strip_size_kb": 0, 00:17:52.387 "state": "configuring", 00:17:52.387 "raid_level": "raid1", 00:17:52.387 "superblock": true, 00:17:52.387 "num_base_bdevs": 2, 00:17:52.387 "num_base_bdevs_discovered": 0, 00:17:52.387 "num_base_bdevs_operational": 2, 00:17:52.387 "base_bdevs_list": [ 00:17:52.387 { 00:17:52.387 "name": "BaseBdev1", 00:17:52.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.387 "is_configured": false, 00:17:52.387 "data_offset": 0, 00:17:52.387 "data_size": 0 00:17:52.387 }, 00:17:52.387 { 00:17:52.387 "name": "BaseBdev2", 00:17:52.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.387 "is_configured": false, 00:17:52.387 "data_offset": 0, 00:17:52.387 "data_size": 0 00:17:52.387 } 00:17:52.387 ] 00:17:52.387 }' 00:17:52.387 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.387 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.660 [2024-12-13 08:29:04.945025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.660 [2024-12-13 08:29:04.945064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.660 [2024-12-13 08:29:04.956993] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.660 [2024-12-13 08:29:04.957032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.660 [2024-12-13 08:29:04.957041] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.660 [2024-12-13 08:29:04.957052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:52.660 08:29:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.660 08:29:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.660 [2024-12-13 08:29:05.003811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.660 BaseBdev1 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.660 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.932 [ 00:17:52.932 { 00:17:52.932 "name": "BaseBdev1", 00:17:52.932 "aliases": [ 00:17:52.932 
"a4fae4c3-f3f1-4720-b12d-25669c93b980" 00:17:52.932 ], 00:17:52.932 "product_name": "Malloc disk", 00:17:52.932 "block_size": 4096, 00:17:52.932 "num_blocks": 8192, 00:17:52.932 "uuid": "a4fae4c3-f3f1-4720-b12d-25669c93b980", 00:17:52.932 "assigned_rate_limits": { 00:17:52.932 "rw_ios_per_sec": 0, 00:17:52.932 "rw_mbytes_per_sec": 0, 00:17:52.932 "r_mbytes_per_sec": 0, 00:17:52.932 "w_mbytes_per_sec": 0 00:17:52.932 }, 00:17:52.932 "claimed": true, 00:17:52.932 "claim_type": "exclusive_write", 00:17:52.932 "zoned": false, 00:17:52.932 "supported_io_types": { 00:17:52.932 "read": true, 00:17:52.932 "write": true, 00:17:52.932 "unmap": true, 00:17:52.932 "flush": true, 00:17:52.932 "reset": true, 00:17:52.932 "nvme_admin": false, 00:17:52.932 "nvme_io": false, 00:17:52.932 "nvme_io_md": false, 00:17:52.932 "write_zeroes": true, 00:17:52.932 "zcopy": true, 00:17:52.932 "get_zone_info": false, 00:17:52.932 "zone_management": false, 00:17:52.932 "zone_append": false, 00:17:52.932 "compare": false, 00:17:52.932 "compare_and_write": false, 00:17:52.932 "abort": true, 00:17:52.932 "seek_hole": false, 00:17:52.932 "seek_data": false, 00:17:52.932 "copy": true, 00:17:52.932 "nvme_iov_md": false 00:17:52.932 }, 00:17:52.932 "memory_domains": [ 00:17:52.932 { 00:17:52.932 "dma_device_id": "system", 00:17:52.932 "dma_device_type": 1 00:17:52.932 }, 00:17:52.932 { 00:17:52.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.932 "dma_device_type": 2 00:17:52.932 } 00:17:52.932 ], 00:17:52.932 "driver_specific": {} 00:17:52.932 } 00:17:52.932 ] 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.932 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.933 "name": "Existed_Raid", 00:17:52.933 "uuid": "c12ead43-6163-4f05-880f-8dd5b07a1993", 00:17:52.933 "strip_size_kb": 0, 00:17:52.933 "state": "configuring", 00:17:52.933 "raid_level": "raid1", 00:17:52.933 "superblock": true, 00:17:52.933 "num_base_bdevs": 2, 00:17:52.933 
"num_base_bdevs_discovered": 1, 00:17:52.933 "num_base_bdevs_operational": 2, 00:17:52.933 "base_bdevs_list": [ 00:17:52.933 { 00:17:52.933 "name": "BaseBdev1", 00:17:52.933 "uuid": "a4fae4c3-f3f1-4720-b12d-25669c93b980", 00:17:52.933 "is_configured": true, 00:17:52.933 "data_offset": 256, 00:17:52.933 "data_size": 7936 00:17:52.933 }, 00:17:52.933 { 00:17:52.933 "name": "BaseBdev2", 00:17:52.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.933 "is_configured": false, 00:17:52.933 "data_offset": 0, 00:17:52.933 "data_size": 0 00:17:52.933 } 00:17:52.933 ] 00:17:52.933 }' 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.933 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.195 [2024-12-13 08:29:05.538996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.195 [2024-12-13 08:29:05.539132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.195 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.195 [2024-12-13 08:29:05.551010] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.195 [2024-12-13 08:29:05.552925] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.195 [2024-12-13 08:29:05.552973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.454 "name": "Existed_Raid", 00:17:53.454 "uuid": "f6c2a43a-3c44-454c-b189-5c24990df517", 00:17:53.454 "strip_size_kb": 0, 00:17:53.454 "state": "configuring", 00:17:53.454 "raid_level": "raid1", 00:17:53.454 "superblock": true, 00:17:53.454 "num_base_bdevs": 2, 00:17:53.454 "num_base_bdevs_discovered": 1, 00:17:53.454 "num_base_bdevs_operational": 2, 00:17:53.454 "base_bdevs_list": [ 00:17:53.454 { 00:17:53.454 "name": "BaseBdev1", 00:17:53.454 "uuid": "a4fae4c3-f3f1-4720-b12d-25669c93b980", 00:17:53.454 "is_configured": true, 00:17:53.454 "data_offset": 256, 00:17:53.454 "data_size": 7936 00:17:53.454 }, 00:17:53.454 { 00:17:53.454 "name": "BaseBdev2", 00:17:53.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.454 "is_configured": false, 00:17:53.454 "data_offset": 0, 00:17:53.454 "data_size": 0 00:17:53.454 } 00:17:53.454 ] 00:17:53.454 }' 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.454 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.714 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:53.714 08:29:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.714 08:29:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.714 [2024-12-13 08:29:06.025635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.714 [2024-12-13 08:29:06.026001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:53.714 [2024-12-13 08:29:06.026057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.714 [2024-12-13 08:29:06.026348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:53.714 [2024-12-13 08:29:06.026572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:53.714 [2024-12-13 08:29:06.026622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:53.714 BaseBdev2 00:17:53.714 [2024-12-13 08:29:06.026813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:53.714 08:29:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.714 [ 00:17:53.714 { 00:17:53.714 "name": "BaseBdev2", 00:17:53.714 "aliases": [ 00:17:53.714 "0f419327-eb16-4afe-9873-10867bcd40cd" 00:17:53.714 ], 00:17:53.714 "product_name": "Malloc disk", 00:17:53.714 "block_size": 4096, 00:17:53.714 "num_blocks": 8192, 00:17:53.714 "uuid": "0f419327-eb16-4afe-9873-10867bcd40cd", 00:17:53.714 "assigned_rate_limits": { 00:17:53.714 "rw_ios_per_sec": 0, 00:17:53.714 "rw_mbytes_per_sec": 0, 00:17:53.714 "r_mbytes_per_sec": 0, 00:17:53.714 "w_mbytes_per_sec": 0 00:17:53.714 }, 00:17:53.714 "claimed": true, 00:17:53.714 "claim_type": "exclusive_write", 00:17:53.714 "zoned": false, 00:17:53.714 "supported_io_types": { 00:17:53.714 "read": true, 00:17:53.714 "write": true, 00:17:53.714 "unmap": true, 00:17:53.714 "flush": true, 00:17:53.714 "reset": true, 00:17:53.714 "nvme_admin": false, 00:17:53.714 "nvme_io": false, 00:17:53.714 "nvme_io_md": false, 00:17:53.714 "write_zeroes": true, 00:17:53.714 "zcopy": true, 00:17:53.714 "get_zone_info": false, 00:17:53.714 "zone_management": false, 00:17:53.714 "zone_append": false, 00:17:53.714 "compare": false, 00:17:53.714 "compare_and_write": false, 00:17:53.714 "abort": true, 00:17:53.714 "seek_hole": false, 00:17:53.714 "seek_data": false, 00:17:53.714 "copy": true, 00:17:53.714 "nvme_iov_md": false 
00:17:53.714 }, 00:17:53.714 "memory_domains": [ 00:17:53.714 { 00:17:53.714 "dma_device_id": "system", 00:17:53.714 "dma_device_type": 1 00:17:53.714 }, 00:17:53.714 { 00:17:53.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.714 "dma_device_type": 2 00:17:53.714 } 00:17:53.714 ], 00:17:53.714 "driver_specific": {} 00:17:53.714 } 00:17:53.714 ] 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.714 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.715 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.973 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.974 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.974 "name": "Existed_Raid", 00:17:53.974 "uuid": "f6c2a43a-3c44-454c-b189-5c24990df517", 00:17:53.974 "strip_size_kb": 0, 00:17:53.974 "state": "online", 00:17:53.974 "raid_level": "raid1", 00:17:53.974 "superblock": true, 00:17:53.974 "num_base_bdevs": 2, 00:17:53.974 "num_base_bdevs_discovered": 2, 00:17:53.974 "num_base_bdevs_operational": 2, 00:17:53.974 "base_bdevs_list": [ 00:17:53.974 { 00:17:53.974 "name": "BaseBdev1", 00:17:53.974 "uuid": "a4fae4c3-f3f1-4720-b12d-25669c93b980", 00:17:53.974 "is_configured": true, 00:17:53.974 "data_offset": 256, 00:17:53.974 "data_size": 7936 00:17:53.974 }, 00:17:53.974 { 00:17:53.974 "name": "BaseBdev2", 00:17:53.974 "uuid": "0f419327-eb16-4afe-9873-10867bcd40cd", 00:17:53.974 "is_configured": true, 00:17:53.974 "data_offset": 256, 00:17:53.974 "data_size": 7936 00:17:53.974 } 00:17:53.974 ] 00:17:53.974 }' 00:17:53.974 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.974 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.233 08:29:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.233 [2024-12-13 08:29:06.497146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.233 "name": "Existed_Raid", 00:17:54.233 "aliases": [ 00:17:54.233 "f6c2a43a-3c44-454c-b189-5c24990df517" 00:17:54.233 ], 00:17:54.233 "product_name": "Raid Volume", 00:17:54.233 "block_size": 4096, 00:17:54.233 "num_blocks": 7936, 00:17:54.233 "uuid": "f6c2a43a-3c44-454c-b189-5c24990df517", 00:17:54.233 "assigned_rate_limits": { 00:17:54.233 "rw_ios_per_sec": 0, 00:17:54.233 "rw_mbytes_per_sec": 0, 00:17:54.233 "r_mbytes_per_sec": 0, 00:17:54.233 "w_mbytes_per_sec": 0 00:17:54.233 }, 00:17:54.233 "claimed": false, 00:17:54.233 "zoned": false, 00:17:54.233 "supported_io_types": { 00:17:54.233 "read": true, 
00:17:54.233 "write": true, 00:17:54.233 "unmap": false, 00:17:54.233 "flush": false, 00:17:54.233 "reset": true, 00:17:54.233 "nvme_admin": false, 00:17:54.233 "nvme_io": false, 00:17:54.233 "nvme_io_md": false, 00:17:54.233 "write_zeroes": true, 00:17:54.233 "zcopy": false, 00:17:54.233 "get_zone_info": false, 00:17:54.233 "zone_management": false, 00:17:54.233 "zone_append": false, 00:17:54.233 "compare": false, 00:17:54.233 "compare_and_write": false, 00:17:54.233 "abort": false, 00:17:54.233 "seek_hole": false, 00:17:54.233 "seek_data": false, 00:17:54.233 "copy": false, 00:17:54.233 "nvme_iov_md": false 00:17:54.233 }, 00:17:54.233 "memory_domains": [ 00:17:54.233 { 00:17:54.233 "dma_device_id": "system", 00:17:54.233 "dma_device_type": 1 00:17:54.233 }, 00:17:54.233 { 00:17:54.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.233 "dma_device_type": 2 00:17:54.233 }, 00:17:54.233 { 00:17:54.233 "dma_device_id": "system", 00:17:54.233 "dma_device_type": 1 00:17:54.233 }, 00:17:54.233 { 00:17:54.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.233 "dma_device_type": 2 00:17:54.233 } 00:17:54.233 ], 00:17:54.233 "driver_specific": { 00:17:54.233 "raid": { 00:17:54.233 "uuid": "f6c2a43a-3c44-454c-b189-5c24990df517", 00:17:54.233 "strip_size_kb": 0, 00:17:54.233 "state": "online", 00:17:54.233 "raid_level": "raid1", 00:17:54.233 "superblock": true, 00:17:54.233 "num_base_bdevs": 2, 00:17:54.233 "num_base_bdevs_discovered": 2, 00:17:54.233 "num_base_bdevs_operational": 2, 00:17:54.233 "base_bdevs_list": [ 00:17:54.233 { 00:17:54.233 "name": "BaseBdev1", 00:17:54.233 "uuid": "a4fae4c3-f3f1-4720-b12d-25669c93b980", 00:17:54.233 "is_configured": true, 00:17:54.233 "data_offset": 256, 00:17:54.233 "data_size": 7936 00:17:54.233 }, 00:17:54.233 { 00:17:54.233 "name": "BaseBdev2", 00:17:54.233 "uuid": "0f419327-eb16-4afe-9873-10867bcd40cd", 00:17:54.233 "is_configured": true, 00:17:54.233 "data_offset": 256, 00:17:54.233 "data_size": 7936 00:17:54.233 } 
00:17:54.233 ] 00:17:54.233 } 00:17:54.233 } 00:17:54.233 }' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:54.233 BaseBdev2' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:54.233 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.492 08:29:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.492 [2024-12-13 08:29:06.704571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:54.492 08:29:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.492 "name": "Existed_Raid", 00:17:54.492 "uuid": "f6c2a43a-3c44-454c-b189-5c24990df517", 00:17:54.492 "strip_size_kb": 0, 00:17:54.492 "state": "online", 00:17:54.492 "raid_level": "raid1", 00:17:54.492 "superblock": true, 00:17:54.492 
"num_base_bdevs": 2, 00:17:54.492 "num_base_bdevs_discovered": 1, 00:17:54.492 "num_base_bdevs_operational": 1, 00:17:54.492 "base_bdevs_list": [ 00:17:54.492 { 00:17:54.492 "name": null, 00:17:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.492 "is_configured": false, 00:17:54.492 "data_offset": 0, 00:17:54.492 "data_size": 7936 00:17:54.492 }, 00:17:54.492 { 00:17:54.492 "name": "BaseBdev2", 00:17:54.492 "uuid": "0f419327-eb16-4afe-9873-10867bcd40cd", 00:17:54.492 "is_configured": true, 00:17:54.492 "data_offset": 256, 00:17:54.492 "data_size": 7936 00:17:54.492 } 00:17:54.492 ] 00:17:54.492 }' 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.492 08:29:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 [2024-12-13 08:29:07.309963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.058 [2024-12-13 08:29:07.310077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.058 [2024-12-13 08:29:07.403932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.058 [2024-12-13 08:29:07.404069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.058 [2024-12-13 08:29:07.404086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:55.317 08:29:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86093 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86093 ']' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86093 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86093 00:17:55.317 killing process with pid 86093 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86093' 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86093 00:17:55.317 [2024-12-13 08:29:07.501972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.317 08:29:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86093 00:17:55.317 [2024-12-13 08:29:07.519168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.252 08:29:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:56.252 00:17:56.252 real 0m5.091s 00:17:56.253 user 0m7.392s 00:17:56.253 sys 0m0.866s 00:17:56.253 08:29:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.253 08:29:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.253 ************************************ 00:17:56.253 END TEST raid_state_function_test_sb_4k 00:17:56.253 ************************************ 00:17:56.511 08:29:08 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:56.511 08:29:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:56.511 08:29:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.511 08:29:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.511 ************************************ 00:17:56.511 START TEST raid_superblock_test_4k 00:17:56.511 ************************************ 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:56.511 
08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86346 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86346 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86346 ']' 00:17:56.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.511 08:29:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.511 [2024-12-13 08:29:08.766982] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:56.511 [2024-12-13 08:29:08.767119] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86346 ] 00:17:56.772 [2024-12-13 08:29:08.939346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.772 [2024-12-13 08:29:09.056669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.030 [2024-12-13 08:29:09.257097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.030 [2024-12-13 08:29:09.257161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.289 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 malloc1 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 [2024-12-13 08:29:09.664869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.548 [2024-12-13 08:29:09.664929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.548 [2024-12-13 08:29:09.664950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:57.548 [2024-12-13 08:29:09.664959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.548 [2024-12-13 08:29:09.667066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.548 [2024-12-13 08:29:09.667115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.548 pt1 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 malloc2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 [2024-12-13 08:29:09.720252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.548 [2024-12-13 08:29:09.720359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.548 [2024-12-13 08:29:09.720401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:57.548 [2024-12-13 08:29:09.720465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.548 [2024-12-13 08:29:09.722580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.548 [2024-12-13 
08:29:09.722650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.548 pt2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 [2024-12-13 08:29:09.732278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.548 [2024-12-13 08:29:09.734151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.548 [2024-12-13 08:29:09.734382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:57.548 [2024-12-13 08:29:09.734434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.548 [2024-12-13 08:29:09.734693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:57.548 [2024-12-13 08:29:09.734896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:57.548 [2024-12-13 08:29:09.734944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:57.548 [2024-12-13 08:29:09.735142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.548 "name": "raid_bdev1", 00:17:57.548 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:57.548 "strip_size_kb": 0, 00:17:57.548 "state": "online", 00:17:57.548 "raid_level": "raid1", 00:17:57.548 "superblock": true, 00:17:57.548 "num_base_bdevs": 2, 00:17:57.548 
"num_base_bdevs_discovered": 2, 00:17:57.548 "num_base_bdevs_operational": 2, 00:17:57.548 "base_bdevs_list": [ 00:17:57.548 { 00:17:57.548 "name": "pt1", 00:17:57.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.548 "is_configured": true, 00:17:57.548 "data_offset": 256, 00:17:57.548 "data_size": 7936 00:17:57.548 }, 00:17:57.548 { 00:17:57.548 "name": "pt2", 00:17:57.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.548 "is_configured": true, 00:17:57.548 "data_offset": 256, 00:17:57.548 "data_size": 7936 00:17:57.548 } 00:17:57.548 ] 00:17:57.548 }' 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.548 08:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.807 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.807 [2024-12-13 08:29:10.155814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:58.065 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.065 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.065 "name": "raid_bdev1", 00:17:58.065 "aliases": [ 00:17:58.065 "e95fcce2-b9ab-4b68-8ed9-dc5108a72653" 00:17:58.065 ], 00:17:58.065 "product_name": "Raid Volume", 00:17:58.065 "block_size": 4096, 00:17:58.065 "num_blocks": 7936, 00:17:58.065 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:58.065 "assigned_rate_limits": { 00:17:58.065 "rw_ios_per_sec": 0, 00:17:58.065 "rw_mbytes_per_sec": 0, 00:17:58.065 "r_mbytes_per_sec": 0, 00:17:58.065 "w_mbytes_per_sec": 0 00:17:58.065 }, 00:17:58.065 "claimed": false, 00:17:58.065 "zoned": false, 00:17:58.065 "supported_io_types": { 00:17:58.065 "read": true, 00:17:58.065 "write": true, 00:17:58.065 "unmap": false, 00:17:58.065 "flush": false, 00:17:58.065 "reset": true, 00:17:58.065 "nvme_admin": false, 00:17:58.065 "nvme_io": false, 00:17:58.065 "nvme_io_md": false, 00:17:58.065 "write_zeroes": true, 00:17:58.065 "zcopy": false, 00:17:58.065 "get_zone_info": false, 00:17:58.065 "zone_management": false, 00:17:58.065 "zone_append": false, 00:17:58.065 "compare": false, 00:17:58.065 "compare_and_write": false, 00:17:58.065 "abort": false, 00:17:58.065 "seek_hole": false, 00:17:58.065 "seek_data": false, 00:17:58.065 "copy": false, 00:17:58.065 "nvme_iov_md": false 00:17:58.066 }, 00:17:58.066 "memory_domains": [ 00:17:58.066 { 00:17:58.066 "dma_device_id": "system", 00:17:58.066 "dma_device_type": 1 00:17:58.066 }, 00:17:58.066 { 00:17:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.066 "dma_device_type": 2 00:17:58.066 }, 00:17:58.066 { 00:17:58.066 "dma_device_id": "system", 00:17:58.066 "dma_device_type": 1 00:17:58.066 }, 00:17:58.066 { 00:17:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.066 "dma_device_type": 2 00:17:58.066 } 00:17:58.066 ], 
00:17:58.066 "driver_specific": { 00:17:58.066 "raid": { 00:17:58.066 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:58.066 "strip_size_kb": 0, 00:17:58.066 "state": "online", 00:17:58.066 "raid_level": "raid1", 00:17:58.066 "superblock": true, 00:17:58.066 "num_base_bdevs": 2, 00:17:58.066 "num_base_bdevs_discovered": 2, 00:17:58.066 "num_base_bdevs_operational": 2, 00:17:58.066 "base_bdevs_list": [ 00:17:58.066 { 00:17:58.066 "name": "pt1", 00:17:58.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.066 "is_configured": true, 00:17:58.066 "data_offset": 256, 00:17:58.066 "data_size": 7936 00:17:58.066 }, 00:17:58.066 { 00:17:58.066 "name": "pt2", 00:17:58.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.066 "is_configured": true, 00:17:58.066 "data_offset": 256, 00:17:58.066 "data_size": 7936 00:17:58.066 } 00:17:58.066 ] 00:17:58.066 } 00:17:58.066 } 00:17:58.066 }' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:58.066 pt2' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.066 [2024-12-13 08:29:10.403375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.066 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e95fcce2-b9ab-4b68-8ed9-dc5108a72653 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e95fcce2-b9ab-4b68-8ed9-dc5108a72653 ']' 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 [2024-12-13 08:29:10.446998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.325 [2024-12-13 08:29:10.447021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.325 [2024-12-13 08:29:10.447119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.325 [2024-12-13 08:29:10.447177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.325 [2024-12-13 08:29:10.447188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.325 [2024-12-13 08:29:10.582814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.325 [2024-12-13 08:29:10.584812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.325 [2024-12-13 08:29:10.584883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:58.325 [2024-12-13 08:29:10.584939] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:58.325 [2024-12-13 08:29:10.584954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.325 [2024-12-13 08:29:10.584963] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:58.325 request: 00:17:58.325 { 00:17:58.325 "name": "raid_bdev1", 00:17:58.325 "raid_level": "raid1", 00:17:58.325 "base_bdevs": [ 00:17:58.325 "malloc1", 00:17:58.325 "malloc2" 00:17:58.325 ], 00:17:58.325 "superblock": false, 00:17:58.325 "method": "bdev_raid_create", 00:17:58.325 "req_id": 1 00:17:58.325 } 00:17:58.325 Got JSON-RPC error response 00:17:58.325 response: 00:17:58.325 { 00:17:58.325 "code": -17, 00:17:58.325 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.325 } 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.325 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.326 [2024-12-13 08:29:10.650690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.326 [2024-12-13 08:29:10.650802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.326 [2024-12-13 08:29:10.650840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.326 [2024-12-13 08:29:10.650874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.326 [2024-12-13 08:29:10.653227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.326 [2024-12-13 08:29:10.653304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.326 [2024-12-13 08:29:10.653414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:58.326 [2024-12-13 08:29:10.653513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.326 pt1 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.326 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.584 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.585 "name": "raid_bdev1", 00:17:58.585 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:58.585 "strip_size_kb": 0, 00:17:58.585 "state": "configuring", 00:17:58.585 "raid_level": "raid1", 00:17:58.585 "superblock": true, 00:17:58.585 "num_base_bdevs": 2, 00:17:58.585 "num_base_bdevs_discovered": 1, 00:17:58.585 "num_base_bdevs_operational": 2, 00:17:58.585 "base_bdevs_list": [ 00:17:58.585 { 00:17:58.585 "name": "pt1", 00:17:58.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.585 "is_configured": true, 00:17:58.585 "data_offset": 256, 00:17:58.585 "data_size": 7936 00:17:58.585 }, 00:17:58.585 { 00:17:58.585 "name": null, 00:17:58.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.585 "is_configured": false, 00:17:58.585 "data_offset": 256, 00:17:58.585 "data_size": 7936 00:17:58.585 } 
00:17:58.585 ] 00:17:58.585 }' 00:17:58.585 08:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.585 08:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.844 [2024-12-13 08:29:11.066001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.844 [2024-12-13 08:29:11.066130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.844 [2024-12-13 08:29:11.066172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.844 [2024-12-13 08:29:11.066203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.844 [2024-12-13 08:29:11.066726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.844 [2024-12-13 08:29:11.066798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.844 [2024-12-13 08:29:11.066893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.844 [2024-12-13 08:29:11.066924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.844 [2024-12-13 08:29:11.067058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:58.844 [2024-12-13 08:29:11.067071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.844 [2024-12-13 08:29:11.067357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:58.844 [2024-12-13 08:29:11.067535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.844 [2024-12-13 08:29:11.067545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.844 [2024-12-13 08:29:11.067706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.844 pt2 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.844 "name": "raid_bdev1", 00:17:58.844 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:58.844 "strip_size_kb": 0, 00:17:58.844 "state": "online", 00:17:58.844 "raid_level": "raid1", 00:17:58.844 "superblock": true, 00:17:58.844 "num_base_bdevs": 2, 00:17:58.844 "num_base_bdevs_discovered": 2, 00:17:58.844 "num_base_bdevs_operational": 2, 00:17:58.844 "base_bdevs_list": [ 00:17:58.844 { 00:17:58.844 "name": "pt1", 00:17:58.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.844 "is_configured": true, 00:17:58.844 "data_offset": 256, 00:17:58.844 "data_size": 7936 00:17:58.844 }, 00:17:58.844 { 00:17:58.844 "name": "pt2", 00:17:58.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.844 "is_configured": true, 00:17:58.844 "data_offset": 256, 00:17:58.844 "data_size": 7936 00:17:58.844 } 00:17:58.844 ] 00:17:58.844 }' 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.844 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.411 [2024-12-13 08:29:11.497520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.411 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.411 "name": "raid_bdev1", 00:17:59.411 "aliases": [ 00:17:59.411 "e95fcce2-b9ab-4b68-8ed9-dc5108a72653" 00:17:59.411 ], 00:17:59.411 "product_name": "Raid Volume", 00:17:59.412 "block_size": 4096, 00:17:59.412 "num_blocks": 7936, 00:17:59.412 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:59.412 "assigned_rate_limits": { 00:17:59.412 "rw_ios_per_sec": 0, 00:17:59.412 "rw_mbytes_per_sec": 0, 00:17:59.412 "r_mbytes_per_sec": 0, 00:17:59.412 "w_mbytes_per_sec": 0 00:17:59.412 }, 00:17:59.412 "claimed": false, 00:17:59.412 "zoned": false, 00:17:59.412 "supported_io_types": { 00:17:59.412 "read": true, 00:17:59.412 "write": true, 00:17:59.412 "unmap": false, 
00:17:59.412 "flush": false, 00:17:59.412 "reset": true, 00:17:59.412 "nvme_admin": false, 00:17:59.412 "nvme_io": false, 00:17:59.412 "nvme_io_md": false, 00:17:59.412 "write_zeroes": true, 00:17:59.412 "zcopy": false, 00:17:59.412 "get_zone_info": false, 00:17:59.412 "zone_management": false, 00:17:59.412 "zone_append": false, 00:17:59.412 "compare": false, 00:17:59.412 "compare_and_write": false, 00:17:59.412 "abort": false, 00:17:59.412 "seek_hole": false, 00:17:59.412 "seek_data": false, 00:17:59.412 "copy": false, 00:17:59.412 "nvme_iov_md": false 00:17:59.412 }, 00:17:59.412 "memory_domains": [ 00:17:59.412 { 00:17:59.412 "dma_device_id": "system", 00:17:59.412 "dma_device_type": 1 00:17:59.412 }, 00:17:59.412 { 00:17:59.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.412 "dma_device_type": 2 00:17:59.412 }, 00:17:59.412 { 00:17:59.412 "dma_device_id": "system", 00:17:59.412 "dma_device_type": 1 00:17:59.412 }, 00:17:59.412 { 00:17:59.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.412 "dma_device_type": 2 00:17:59.412 } 00:17:59.412 ], 00:17:59.412 "driver_specific": { 00:17:59.412 "raid": { 00:17:59.412 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:17:59.412 "strip_size_kb": 0, 00:17:59.412 "state": "online", 00:17:59.412 "raid_level": "raid1", 00:17:59.412 "superblock": true, 00:17:59.412 "num_base_bdevs": 2, 00:17:59.412 "num_base_bdevs_discovered": 2, 00:17:59.412 "num_base_bdevs_operational": 2, 00:17:59.412 "base_bdevs_list": [ 00:17:59.412 { 00:17:59.412 "name": "pt1", 00:17:59.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.412 "is_configured": true, 00:17:59.412 "data_offset": 256, 00:17:59.412 "data_size": 7936 00:17:59.412 }, 00:17:59.412 { 00:17:59.412 "name": "pt2", 00:17:59.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.412 "is_configured": true, 00:17:59.412 "data_offset": 256, 00:17:59.412 "data_size": 7936 00:17:59.412 } 00:17:59.412 ] 00:17:59.412 } 00:17:59.412 } 00:17:59.412 }' 00:17:59.412 
08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.412 pt2' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.412 
08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.412 [2024-12-13 08:29:11.737103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.412 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e95fcce2-b9ab-4b68-8ed9-dc5108a72653 '!=' e95fcce2-b9ab-4b68-8ed9-dc5108a72653 ']' 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.671 [2024-12-13 08:29:11.784834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:59.671 
08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.671 "name": "raid_bdev1", 00:17:59.671 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 
00:17:59.671 "strip_size_kb": 0, 00:17:59.671 "state": "online", 00:17:59.671 "raid_level": "raid1", 00:17:59.671 "superblock": true, 00:17:59.671 "num_base_bdevs": 2, 00:17:59.671 "num_base_bdevs_discovered": 1, 00:17:59.671 "num_base_bdevs_operational": 1, 00:17:59.671 "base_bdevs_list": [ 00:17:59.671 { 00:17:59.671 "name": null, 00:17:59.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.671 "is_configured": false, 00:17:59.671 "data_offset": 0, 00:17:59.671 "data_size": 7936 00:17:59.671 }, 00:17:59.671 { 00:17:59.671 "name": "pt2", 00:17:59.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.671 "is_configured": true, 00:17:59.671 "data_offset": 256, 00:17:59.671 "data_size": 7936 00:17:59.671 } 00:17:59.671 ] 00:17:59.671 }' 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.671 08:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 [2024-12-13 08:29:12.224013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.930 [2024-12-13 08:29:12.224111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.930 [2024-12-13 08:29:12.224216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.930 [2024-12-13 08:29:12.224296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.930 [2024-12-13 08:29:12.224332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:59.930 08:29:12 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:59.930 08:29:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 [2024-12-13 08:29:12.279880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.930 [2024-12-13 08:29:12.279978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.930 [2024-12-13 08:29:12.280011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:59.930 [2024-12-13 08:29:12.280039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.930 [2024-12-13 08:29:12.282253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.930 [2024-12-13 08:29:12.282353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.930 [2024-12-13 08:29:12.282454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:59.930 [2024-12-13 08:29:12.282519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.930 [2024-12-13 08:29:12.282638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:59.930 [2024-12-13 08:29:12.282681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:59.930 [2024-12-13 08:29:12.282921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.930 [2024-12-13 08:29:12.283129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:59.930 [2024-12-13 08:29:12.283171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
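For readers following the trace, the `verify_raid_bdev_state` helper invoked above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`. The following is a minimal Python sketch of the comparisons that bash helper performs; the field values are copied from the JSON dumped in this log, and the function name simply mirrors the shell helper — it is an illustration, not SPDK code:

```python
import json

# JSON as dumped by "rpc_cmd bdev_raid_get_bdevs all" in the trace above,
# abridged to the fields the shell helper actually inspects.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the bash helper's checks: state, RAID level, strip size,
    # and the number of operational base bdevs must all match.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# Corresponds to "verify_raid_bdev_state raid_bdev1 online raid1 0 1"
# from the trace: after pt1 is deleted, raid1 stays online with one
# operational base bdev.
ok = verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1)
print(ok)
```

Note how the trace above reflects exactly this: after `bdev_passthru_delete pt1`, the dumped JSON shows `num_base_bdevs_discovered: 1` while `state` remains `online`, which is what the helper asserts for a redundant raid1 volume.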
00:17:59.930 [2024-12-13 08:29:12.283380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.930 pt2 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.930 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.189 "name": "raid_bdev1", 00:18:00.189 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:18:00.189 "strip_size_kb": 0, 00:18:00.189 "state": "online", 00:18:00.189 "raid_level": "raid1", 00:18:00.189 "superblock": true, 00:18:00.189 "num_base_bdevs": 2, 00:18:00.189 "num_base_bdevs_discovered": 1, 00:18:00.189 "num_base_bdevs_operational": 1, 00:18:00.189 "base_bdevs_list": [ 00:18:00.189 { 00:18:00.189 "name": null, 00:18:00.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.189 "is_configured": false, 00:18:00.189 "data_offset": 256, 00:18:00.189 "data_size": 7936 00:18:00.189 }, 00:18:00.189 { 00:18:00.189 "name": "pt2", 00:18:00.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.189 "is_configured": true, 00:18:00.189 "data_offset": 256, 00:18:00.189 "data_size": 7936 00:18:00.189 } 00:18:00.189 ] 00:18:00.189 }' 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.189 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.448 [2024-12-13 08:29:12.727188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.448 [2024-12-13 08:29:12.727220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.448 [2024-12-13 08:29:12.727328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.448 [2024-12-13 08:29:12.727381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.448 [2024-12-13 08:29:12.727390] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.448 [2024-12-13 08:29:12.771136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.448 [2024-12-13 08:29:12.771236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.448 [2024-12-13 08:29:12.771286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:00.448 [2024-12-13 08:29:12.771319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.448 [2024-12-13 08:29:12.773626] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.448 [2024-12-13 08:29:12.773699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.448 [2024-12-13 08:29:12.773812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.448 [2024-12-13 08:29:12.773899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.448 [2024-12-13 08:29:12.774162] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:00.448 [2024-12-13 08:29:12.774227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.448 [2024-12-13 08:29:12.774267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:00.448 [2024-12-13 08:29:12.774386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.448 [2024-12-13 08:29:12.774511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:00.448 [2024-12-13 08:29:12.774554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.448 [2024-12-13 08:29:12.774849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:00.448 [2024-12-13 08:29:12.775048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:00.448 [2024-12-13 08:29:12.775117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:00.448 [2024-12-13 08:29:12.775377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.448 pt1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.448 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.449 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.449 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.449 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.449 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.707 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.707 "name": "raid_bdev1", 00:18:00.707 "uuid": "e95fcce2-b9ab-4b68-8ed9-dc5108a72653", 00:18:00.707 "strip_size_kb": 0, 00:18:00.707 "state": "online", 00:18:00.707 "raid_level": "raid1", 
00:18:00.707 "superblock": true, 00:18:00.707 "num_base_bdevs": 2, 00:18:00.707 "num_base_bdevs_discovered": 1, 00:18:00.707 "num_base_bdevs_operational": 1, 00:18:00.707 "base_bdevs_list": [ 00:18:00.707 { 00:18:00.707 "name": null, 00:18:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.707 "is_configured": false, 00:18:00.707 "data_offset": 256, 00:18:00.707 "data_size": 7936 00:18:00.707 }, 00:18:00.707 { 00:18:00.707 "name": "pt2", 00:18:00.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.707 "is_configured": true, 00:18:00.707 "data_offset": 256, 00:18:00.707 "data_size": 7936 00:18:00.707 } 00:18:00.707 ] 00:18:00.707 }' 00:18:00.707 08:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.707 08:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.966 
[2024-12-13 08:29:13.266683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e95fcce2-b9ab-4b68-8ed9-dc5108a72653 '!=' e95fcce2-b9ab-4b68-8ed9-dc5108a72653 ']' 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86346 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86346 ']' 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86346 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.966 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86346 00:18:01.225 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.225 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.225 killing process with pid 86346 00:18:01.225 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86346' 00:18:01.225 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86346 00:18:01.225 [2024-12-13 08:29:13.339572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.225 [2024-12-13 08:29:13.339672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.225 [2024-12-13 08:29:13.339724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.225 [2024-12-13 08:29:13.339739] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:01.225 08:29:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86346 00:18:01.225 [2024-12-13 08:29:13.546822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.603 08:29:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:02.603 00:18:02.603 real 0m5.987s 00:18:02.603 user 0m9.076s 00:18:02.603 sys 0m1.036s 00:18:02.603 08:29:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.603 08:29:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.603 ************************************ 00:18:02.603 END TEST raid_superblock_test_4k 00:18:02.603 ************************************ 00:18:02.603 08:29:14 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:02.603 08:29:14 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:02.603 08:29:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:02.603 08:29:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.603 08:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.603 ************************************ 00:18:02.603 START TEST raid_rebuild_test_sb_4k 00:18:02.603 ************************************ 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:02.603 08:29:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86669 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86669 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86669 ']' 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.603 08:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.603 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:02.603 Zero copy mechanism will not be used. 00:18:02.603 [2024-12-13 08:29:14.821070] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:18:02.603 [2024-12-13 08:29:14.821279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86669 ] 00:18:02.862 [2024-12-13 08:29:14.995911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.862 [2024-12-13 08:29:15.113869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.122 [2024-12-13 08:29:15.318316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.122 [2024-12-13 08:29:15.318411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.381 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.381 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:03.381 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.382 BaseBdev1_malloc 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.382 [2024-12-13 08:29:15.693162] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.382 [2024-12-13 08:29:15.693221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.382 [2024-12-13 08:29:15.693242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.382 [2024-12-13 08:29:15.693253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.382 [2024-12-13 08:29:15.695249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.382 [2024-12-13 08:29:15.695313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.382 BaseBdev1 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.382 BaseBdev2_malloc 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.382 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 [2024-12-13 08:29:15.747488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:03.642 [2024-12-13 08:29:15.747550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:03.642 [2024-12-13 08:29:15.747570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.642 [2024-12-13 08:29:15.747582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.642 [2024-12-13 08:29:15.749647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.642 [2024-12-13 08:29:15.749689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:03.642 BaseBdev2 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 spare_malloc 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 spare_delay 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 
[2024-12-13 08:29:15.826984] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.642 [2024-12-13 08:29:15.827047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.642 [2024-12-13 08:29:15.827066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:03.642 [2024-12-13 08:29:15.827077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.642 [2024-12-13 08:29:15.829203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.642 [2024-12-13 08:29:15.829280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.642 spare 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 [2024-12-13 08:29:15.839016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.642 [2024-12-13 08:29:15.840815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.642 [2024-12-13 08:29:15.841053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:03.642 [2024-12-13 08:29:15.841126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.642 [2024-12-13 08:29:15.841373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:03.642 [2024-12-13 08:29:15.841582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:03.642 [2024-12-13 
08:29:15.841625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:03.642 [2024-12-13 08:29:15.841801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.642 "name": "raid_bdev1", 00:18:03.642 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:03.642 "strip_size_kb": 0, 00:18:03.642 "state": "online", 00:18:03.642 "raid_level": "raid1", 00:18:03.642 "superblock": true, 00:18:03.642 "num_base_bdevs": 2, 00:18:03.642 "num_base_bdevs_discovered": 2, 00:18:03.642 "num_base_bdevs_operational": 2, 00:18:03.642 "base_bdevs_list": [ 00:18:03.642 { 00:18:03.642 "name": "BaseBdev1", 00:18:03.642 "uuid": "1874a117-1dfd-5f06-a9a4-bbd6c50334cc", 00:18:03.642 "is_configured": true, 00:18:03.642 "data_offset": 256, 00:18:03.642 "data_size": 7936 00:18:03.642 }, 00:18:03.642 { 00:18:03.642 "name": "BaseBdev2", 00:18:03.642 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:03.642 "is_configured": true, 00:18:03.642 "data_offset": 256, 00:18:03.642 "data_size": 7936 00:18:03.642 } 00:18:03.642 ] 00:18:03.642 }' 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.642 08:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.902 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.902 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.902 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.902 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.162 [2024-12-13 08:29:16.266574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.162 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.162 
08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:04.421 [2024-12-13 08:29:16.533934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:04.421 /dev/nbd0 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.421 1+0 records in 00:18:04.421 1+0 records out 00:18:04.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341259 s, 12.0 MB/s 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.421 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:04.422 08:29:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:04.422 08:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:04.990 7936+0 records in 00:18:04.990 7936+0 records out 00:18:04.990 32505856 bytes (33 MB, 31 MiB) copied, 0.593247 s, 54.8 MB/s 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.990 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.250 
[2024-12-13 08:29:17.410194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.250 [2024-12-13 08:29:17.427450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.250 08:29:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.250 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.250 "name": "raid_bdev1", 00:18:05.250 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:05.250 "strip_size_kb": 0, 00:18:05.250 "state": "online", 00:18:05.250 "raid_level": "raid1", 00:18:05.250 "superblock": true, 00:18:05.250 "num_base_bdevs": 2, 00:18:05.250 "num_base_bdevs_discovered": 1, 00:18:05.250 "num_base_bdevs_operational": 1, 00:18:05.250 "base_bdevs_list": [ 00:18:05.250 { 00:18:05.251 "name": null, 00:18:05.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.251 "is_configured": false, 00:18:05.251 "data_offset": 0, 00:18:05.251 "data_size": 7936 00:18:05.251 }, 00:18:05.251 { 00:18:05.251 "name": "BaseBdev2", 00:18:05.251 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:05.251 "is_configured": true, 00:18:05.251 "data_offset": 256, 00:18:05.251 
"data_size": 7936 00:18:05.251 } 00:18:05.251 ] 00:18:05.251 }' 00:18:05.251 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.251 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.510 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.510 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.510 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.769 [2024-12-13 08:29:17.874720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.769 [2024-12-13 08:29:17.892180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:05.769 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.769 08:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:05.769 [2024-12-13 08:29:17.894029] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.707 "name": "raid_bdev1", 00:18:06.707 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:06.707 "strip_size_kb": 0, 00:18:06.707 "state": "online", 00:18:06.707 "raid_level": "raid1", 00:18:06.707 "superblock": true, 00:18:06.707 "num_base_bdevs": 2, 00:18:06.707 "num_base_bdevs_discovered": 2, 00:18:06.707 "num_base_bdevs_operational": 2, 00:18:06.707 "process": { 00:18:06.707 "type": "rebuild", 00:18:06.707 "target": "spare", 00:18:06.707 "progress": { 00:18:06.707 "blocks": 2560, 00:18:06.707 "percent": 32 00:18:06.707 } 00:18:06.707 }, 00:18:06.707 "base_bdevs_list": [ 00:18:06.707 { 00:18:06.707 "name": "spare", 00:18:06.707 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:06.707 "is_configured": true, 00:18:06.707 "data_offset": 256, 00:18:06.707 "data_size": 7936 00:18:06.707 }, 00:18:06.707 { 00:18:06.707 "name": "BaseBdev2", 00:18:06.707 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:06.707 "is_configured": true, 00:18:06.707 "data_offset": 256, 00:18:06.707 "data_size": 7936 00:18:06.707 } 00:18:06.707 ] 00:18:06.707 }' 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.707 08:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.707 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:06.707 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.707 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.707 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.707 [2024-12-13 08:29:19.037706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.967 [2024-12-13 08:29:19.099303] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.967 [2024-12-13 08:29:19.099372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.967 [2024-12-13 08:29:19.099386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.967 [2024-12-13 08:29:19.099396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.967 "name": "raid_bdev1", 00:18:06.967 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:06.967 "strip_size_kb": 0, 00:18:06.967 "state": "online", 00:18:06.967 "raid_level": "raid1", 00:18:06.967 "superblock": true, 00:18:06.967 "num_base_bdevs": 2, 00:18:06.967 "num_base_bdevs_discovered": 1, 00:18:06.967 "num_base_bdevs_operational": 1, 00:18:06.967 "base_bdevs_list": [ 00:18:06.967 { 00:18:06.967 "name": null, 00:18:06.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.967 "is_configured": false, 00:18:06.967 "data_offset": 0, 00:18:06.967 "data_size": 7936 00:18:06.967 }, 00:18:06.967 { 00:18:06.967 "name": "BaseBdev2", 00:18:06.967 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:06.967 "is_configured": true, 00:18:06.967 "data_offset": 256, 00:18:06.967 "data_size": 7936 00:18:06.967 } 00:18:06.967 ] 00:18:06.967 }' 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.967 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.226 08:29:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.226 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.226 "name": "raid_bdev1", 00:18:07.226 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:07.226 "strip_size_kb": 0, 00:18:07.226 "state": "online", 00:18:07.226 "raid_level": "raid1", 00:18:07.226 "superblock": true, 00:18:07.226 "num_base_bdevs": 2, 00:18:07.226 "num_base_bdevs_discovered": 1, 00:18:07.226 "num_base_bdevs_operational": 1, 00:18:07.226 "base_bdevs_list": [ 00:18:07.226 { 00:18:07.226 "name": null, 00:18:07.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.226 "is_configured": false, 00:18:07.226 "data_offset": 0, 00:18:07.226 "data_size": 7936 00:18:07.226 }, 00:18:07.226 { 00:18:07.226 "name": "BaseBdev2", 00:18:07.226 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:07.226 "is_configured": true, 00:18:07.226 "data_offset": 
256, 00:18:07.226 "data_size": 7936 00:18:07.226 } 00:18:07.226 ] 00:18:07.226 }' 00:18:07.485 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.485 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.485 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.485 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.486 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.486 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.486 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.486 [2024-12-13 08:29:19.693561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.486 [2024-12-13 08:29:19.709834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:07.486 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.486 08:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:07.486 [2024-12-13 08:29:19.711721] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.423 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.424 "name": "raid_bdev1", 00:18:08.424 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:08.424 "strip_size_kb": 0, 00:18:08.424 "state": "online", 00:18:08.424 "raid_level": "raid1", 00:18:08.424 "superblock": true, 00:18:08.424 "num_base_bdevs": 2, 00:18:08.424 "num_base_bdevs_discovered": 2, 00:18:08.424 "num_base_bdevs_operational": 2, 00:18:08.424 "process": { 00:18:08.424 "type": "rebuild", 00:18:08.424 "target": "spare", 00:18:08.424 "progress": { 00:18:08.424 "blocks": 2560, 00:18:08.424 "percent": 32 00:18:08.424 } 00:18:08.424 }, 00:18:08.424 "base_bdevs_list": [ 00:18:08.424 { 00:18:08.424 "name": "spare", 00:18:08.424 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:08.424 "is_configured": true, 00:18:08.424 "data_offset": 256, 00:18:08.424 "data_size": 7936 00:18:08.424 }, 00:18:08.424 { 00:18:08.424 "name": "BaseBdev2", 00:18:08.424 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:08.424 "is_configured": true, 00:18:08.424 "data_offset": 256, 00:18:08.424 "data_size": 7936 00:18:08.424 } 00:18:08.424 ] 00:18:08.424 }' 00:18:08.424 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.683 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:08.683 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.683 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.683 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:08.684 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=681 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.684 08:29:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.684 "name": "raid_bdev1", 00:18:08.684 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:08.684 "strip_size_kb": 0, 00:18:08.684 "state": "online", 00:18:08.684 "raid_level": "raid1", 00:18:08.684 "superblock": true, 00:18:08.684 "num_base_bdevs": 2, 00:18:08.684 "num_base_bdevs_discovered": 2, 00:18:08.684 "num_base_bdevs_operational": 2, 00:18:08.684 "process": { 00:18:08.684 "type": "rebuild", 00:18:08.684 "target": "spare", 00:18:08.684 "progress": { 00:18:08.684 "blocks": 2816, 00:18:08.684 "percent": 35 00:18:08.684 } 00:18:08.684 }, 00:18:08.684 "base_bdevs_list": [ 00:18:08.684 { 00:18:08.684 "name": "spare", 00:18:08.684 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:08.684 "is_configured": true, 00:18:08.684 "data_offset": 256, 00:18:08.684 "data_size": 7936 00:18:08.684 }, 00:18:08.684 { 00:18:08.684 "name": "BaseBdev2", 00:18:08.684 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:08.684 "is_configured": true, 00:18:08.684 "data_offset": 256, 00:18:08.684 "data_size": 7936 00:18:08.684 } 00:18:08.684 ] 00:18:08.684 }' 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.684 08:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.684 08:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.684 08:29:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.063 "name": "raid_bdev1", 00:18:10.063 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:10.063 "strip_size_kb": 0, 00:18:10.063 "state": "online", 00:18:10.063 "raid_level": "raid1", 00:18:10.063 "superblock": true, 00:18:10.063 "num_base_bdevs": 2, 00:18:10.063 "num_base_bdevs_discovered": 2, 00:18:10.063 "num_base_bdevs_operational": 2, 00:18:10.063 "process": { 00:18:10.063 "type": "rebuild", 00:18:10.063 "target": "spare", 00:18:10.063 "progress": { 00:18:10.063 "blocks": 5888, 00:18:10.063 "percent": 74 00:18:10.063 } 00:18:10.063 }, 00:18:10.063 "base_bdevs_list": [ 00:18:10.063 { 
00:18:10.063 "name": "spare", 00:18:10.063 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:10.063 "is_configured": true, 00:18:10.063 "data_offset": 256, 00:18:10.063 "data_size": 7936 00:18:10.063 }, 00:18:10.063 { 00:18:10.063 "name": "BaseBdev2", 00:18:10.063 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:10.063 "is_configured": true, 00:18:10.063 "data_offset": 256, 00:18:10.063 "data_size": 7936 00:18:10.063 } 00:18:10.063 ] 00:18:10.063 }' 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.063 08:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.632 [2024-12-13 08:29:22.824590] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:10.632 [2024-12-13 08:29:22.824672] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:10.632 [2024-12-13 08:29:22.824777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.891 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.891 "name": "raid_bdev1", 00:18:10.891 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:10.891 "strip_size_kb": 0, 00:18:10.891 "state": "online", 00:18:10.891 "raid_level": "raid1", 00:18:10.891 "superblock": true, 00:18:10.891 "num_base_bdevs": 2, 00:18:10.891 "num_base_bdevs_discovered": 2, 00:18:10.891 "num_base_bdevs_operational": 2, 00:18:10.891 "base_bdevs_list": [ 00:18:10.891 { 00:18:10.892 "name": "spare", 00:18:10.892 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:10.892 "is_configured": true, 00:18:10.892 "data_offset": 256, 00:18:10.892 "data_size": 7936 00:18:10.892 }, 00:18:10.892 { 00:18:10.892 "name": "BaseBdev2", 00:18:10.892 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:10.892 "is_configured": true, 00:18:10.892 "data_offset": 256, 00:18:10.892 "data_size": 7936 00:18:10.892 } 00:18:10.892 ] 00:18:10.892 }' 00:18:10.892 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.151 "name": "raid_bdev1", 00:18:11.151 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:11.151 "strip_size_kb": 0, 00:18:11.151 "state": "online", 00:18:11.151 "raid_level": "raid1", 00:18:11.151 "superblock": true, 00:18:11.151 "num_base_bdevs": 2, 00:18:11.151 "num_base_bdevs_discovered": 2, 00:18:11.151 "num_base_bdevs_operational": 2, 00:18:11.151 "base_bdevs_list": [ 00:18:11.151 { 00:18:11.151 "name": "spare", 00:18:11.151 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:11.151 "is_configured": true, 00:18:11.151 
"data_offset": 256, 00:18:11.151 "data_size": 7936 00:18:11.151 }, 00:18:11.151 { 00:18:11.151 "name": "BaseBdev2", 00:18:11.151 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:11.151 "is_configured": true, 00:18:11.151 "data_offset": 256, 00:18:11.151 "data_size": 7936 00:18:11.151 } 00:18:11.151 ] 00:18:11.151 }' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.151 "name": "raid_bdev1", 00:18:11.151 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:11.151 "strip_size_kb": 0, 00:18:11.151 "state": "online", 00:18:11.151 "raid_level": "raid1", 00:18:11.151 "superblock": true, 00:18:11.151 "num_base_bdevs": 2, 00:18:11.151 "num_base_bdevs_discovered": 2, 00:18:11.151 "num_base_bdevs_operational": 2, 00:18:11.151 "base_bdevs_list": [ 00:18:11.151 { 00:18:11.151 "name": "spare", 00:18:11.151 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:11.151 "is_configured": true, 00:18:11.151 "data_offset": 256, 00:18:11.151 "data_size": 7936 00:18:11.151 }, 00:18:11.151 { 00:18:11.151 "name": "BaseBdev2", 00:18:11.151 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:11.151 "is_configured": true, 00:18:11.151 "data_offset": 256, 00:18:11.151 "data_size": 7936 00:18:11.151 } 00:18:11.151 ] 00:18:11.151 }' 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.151 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.719 
[2024-12-13 08:29:23.903662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.719 [2024-12-13 08:29:23.903756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.719 [2024-12-13 08:29:23.903841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.719 [2024-12-13 08:29:23.903907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.719 [2024-12-13 08:29:23.903919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.719 08:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:11.978 /dev/nbd0 00:18:11.978 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.978 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.979 1+0 records in 00:18:11.979 1+0 records out 00:18:11.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355397 s, 11.5 MB/s 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.979 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:12.238 /dev/nbd1 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:12.238 1+0 records in 00:18:12.238 1+0 records out 00:18:12.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045155 s, 9.1 MB/s 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.238 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.497 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.864 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:12.864 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.864 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.864 08:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:12.864 08:29:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.864 [2024-12-13 08:29:25.099323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.864 [2024-12-13 08:29:25.099385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.864 [2024-12-13 08:29:25.099423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:12.864 [2024-12-13 08:29:25.099434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.864 [2024-12-13 08:29:25.101826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.864 
[2024-12-13 08:29:25.101865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.864 [2024-12-13 08:29:25.101960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.864 [2024-12-13 08:29:25.102016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.864 [2024-12-13 08:29:25.102227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.864 spare 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.864 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.144 [2024-12-13 08:29:25.202161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:13.144 [2024-12-13 08:29:25.202215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.144 [2024-12-13 08:29:25.202536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:13.144 [2024-12-13 08:29:25.202758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:13.144 [2024-12-13 08:29:25.202777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:13.144 [2024-12-13 08:29:25.202976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.144 08:29:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.144 "name": "raid_bdev1", 00:18:13.144 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:13.144 "strip_size_kb": 0, 00:18:13.144 "state": "online", 00:18:13.144 "raid_level": "raid1", 00:18:13.144 "superblock": true, 00:18:13.144 "num_base_bdevs": 2, 00:18:13.144 "num_base_bdevs_discovered": 2, 00:18:13.144 "num_base_bdevs_operational": 2, 
00:18:13.144 "base_bdevs_list": [ 00:18:13.144 { 00:18:13.144 "name": "spare", 00:18:13.144 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:13.144 "is_configured": true, 00:18:13.144 "data_offset": 256, 00:18:13.144 "data_size": 7936 00:18:13.144 }, 00:18:13.144 { 00:18:13.144 "name": "BaseBdev2", 00:18:13.144 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:13.144 "is_configured": true, 00:18:13.144 "data_offset": 256, 00:18:13.144 "data_size": 7936 00:18:13.144 } 00:18:13.144 ] 00:18:13.144 }' 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.144 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.403 "name": "raid_bdev1", 00:18:13.403 
"uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:13.403 "strip_size_kb": 0, 00:18:13.403 "state": "online", 00:18:13.403 "raid_level": "raid1", 00:18:13.403 "superblock": true, 00:18:13.403 "num_base_bdevs": 2, 00:18:13.403 "num_base_bdevs_discovered": 2, 00:18:13.403 "num_base_bdevs_operational": 2, 00:18:13.403 "base_bdevs_list": [ 00:18:13.403 { 00:18:13.403 "name": "spare", 00:18:13.403 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:13.403 "is_configured": true, 00:18:13.403 "data_offset": 256, 00:18:13.403 "data_size": 7936 00:18:13.403 }, 00:18:13.403 { 00:18:13.403 "name": "BaseBdev2", 00:18:13.403 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:13.403 "is_configured": true, 00:18:13.403 "data_offset": 256, 00:18:13.403 "data_size": 7936 00:18:13.403 } 00:18:13.403 ] 00:18:13.403 }' 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.403 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 [2024-12-13 08:29:25.826144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.663 
08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.663 "name": "raid_bdev1", 00:18:13.663 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:13.663 "strip_size_kb": 0, 00:18:13.663 "state": "online", 00:18:13.663 "raid_level": "raid1", 00:18:13.663 "superblock": true, 00:18:13.663 "num_base_bdevs": 2, 00:18:13.663 "num_base_bdevs_discovered": 1, 00:18:13.663 "num_base_bdevs_operational": 1, 00:18:13.663 "base_bdevs_list": [ 00:18:13.663 { 00:18:13.663 "name": null, 00:18:13.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.663 "is_configured": false, 00:18:13.663 "data_offset": 0, 00:18:13.663 "data_size": 7936 00:18:13.663 }, 00:18:13.663 { 00:18:13.663 "name": "BaseBdev2", 00:18:13.663 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:13.663 "is_configured": true, 00:18:13.663 "data_offset": 256, 00:18:13.663 "data_size": 7936 00:18:13.663 } 00:18:13.663 ] 00:18:13.663 }' 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.663 08:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 08:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.231 08:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.231 08:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 [2024-12-13 08:29:26.293359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.231 [2024-12-13 08:29:26.293583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:18:14.231 [2024-12-13 08:29:26.293607] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:14.231 [2024-12-13 08:29:26.293640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.231 [2024-12-13 08:29:26.309602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:14.231 08:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.231 08:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:14.231 [2024-12-13 08:29:26.311469] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.168 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.169 
"name": "raid_bdev1", 00:18:15.169 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:15.169 "strip_size_kb": 0, 00:18:15.169 "state": "online", 00:18:15.169 "raid_level": "raid1", 00:18:15.169 "superblock": true, 00:18:15.169 "num_base_bdevs": 2, 00:18:15.169 "num_base_bdevs_discovered": 2, 00:18:15.169 "num_base_bdevs_operational": 2, 00:18:15.169 "process": { 00:18:15.169 "type": "rebuild", 00:18:15.169 "target": "spare", 00:18:15.169 "progress": { 00:18:15.169 "blocks": 2560, 00:18:15.169 "percent": 32 00:18:15.169 } 00:18:15.169 }, 00:18:15.169 "base_bdevs_list": [ 00:18:15.169 { 00:18:15.169 "name": "spare", 00:18:15.169 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:15.169 "is_configured": true, 00:18:15.169 "data_offset": 256, 00:18:15.169 "data_size": 7936 00:18:15.169 }, 00:18:15.169 { 00:18:15.169 "name": "BaseBdev2", 00:18:15.169 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:15.169 "is_configured": true, 00:18:15.169 "data_offset": 256, 00:18:15.169 "data_size": 7936 00:18:15.169 } 00:18:15.169 ] 00:18:15.169 }' 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.169 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.169 [2024-12-13 08:29:27.458915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.169 [2024-12-13 
08:29:27.516859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.169 [2024-12-13 08:29:27.516931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.169 [2024-12-13 08:29:27.516953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.169 [2024-12-13 08:29:27.516961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.431 08:29:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.431 "name": "raid_bdev1", 00:18:15.431 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:15.431 "strip_size_kb": 0, 00:18:15.431 "state": "online", 00:18:15.431 "raid_level": "raid1", 00:18:15.431 "superblock": true, 00:18:15.431 "num_base_bdevs": 2, 00:18:15.431 "num_base_bdevs_discovered": 1, 00:18:15.431 "num_base_bdevs_operational": 1, 00:18:15.431 "base_bdevs_list": [ 00:18:15.431 { 00:18:15.431 "name": null, 00:18:15.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.431 "is_configured": false, 00:18:15.431 "data_offset": 0, 00:18:15.431 "data_size": 7936 00:18:15.431 }, 00:18:15.431 { 00:18:15.431 "name": "BaseBdev2", 00:18:15.431 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:15.431 "is_configured": true, 00:18:15.431 "data_offset": 256, 00:18:15.431 "data_size": 7936 00:18:15.431 } 00:18:15.431 ] 00:18:15.431 }' 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.431 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.690 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.690 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.690 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.690 [2024-12-13 08:29:27.951255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.690 [2024-12-13 08:29:27.951337] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.690 [2024-12-13 08:29:27.951359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:15.690 [2024-12-13 08:29:27.951370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.690 [2024-12-13 08:29:27.951871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.690 [2024-12-13 08:29:27.951902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.690 [2024-12-13 08:29:27.952000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:15.690 [2024-12-13 08:29:27.952023] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.690 [2024-12-13 08:29:27.952036] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:15.690 [2024-12-13 08:29:27.952059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.690 [2024-12-13 08:29:27.968715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:15.690 spare 00:18:15.690 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.690 08:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:15.690 [2024-12-13 08:29:27.970583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.626 08:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.885 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.885 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.885 "name": "raid_bdev1", 00:18:16.885 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:16.885 "strip_size_kb": 0, 00:18:16.885 
"state": "online", 00:18:16.885 "raid_level": "raid1", 00:18:16.885 "superblock": true, 00:18:16.885 "num_base_bdevs": 2, 00:18:16.885 "num_base_bdevs_discovered": 2, 00:18:16.885 "num_base_bdevs_operational": 2, 00:18:16.885 "process": { 00:18:16.885 "type": "rebuild", 00:18:16.885 "target": "spare", 00:18:16.885 "progress": { 00:18:16.885 "blocks": 2560, 00:18:16.885 "percent": 32 00:18:16.885 } 00:18:16.885 }, 00:18:16.885 "base_bdevs_list": [ 00:18:16.885 { 00:18:16.885 "name": "spare", 00:18:16.885 "uuid": "f221fb7b-4ec2-5271-807d-382738eb63fc", 00:18:16.885 "is_configured": true, 00:18:16.885 "data_offset": 256, 00:18:16.885 "data_size": 7936 00:18:16.885 }, 00:18:16.885 { 00:18:16.885 "name": "BaseBdev2", 00:18:16.885 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:16.885 "is_configured": true, 00:18:16.886 "data_offset": 256, 00:18:16.886 "data_size": 7936 00:18:16.886 } 00:18:16.886 ] 00:18:16.886 }' 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.886 [2024-12-13 08:29:29.130629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.886 [2024-12-13 08:29:29.175668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:16.886 [2024-12-13 08:29:29.175733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.886 [2024-12-13 08:29:29.175749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.886 [2024-12-13 08:29:29.175757] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.886 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.145 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.145 "name": "raid_bdev1", 00:18:17.145 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:17.145 "strip_size_kb": 0, 00:18:17.145 "state": "online", 00:18:17.145 "raid_level": "raid1", 00:18:17.145 "superblock": true, 00:18:17.145 "num_base_bdevs": 2, 00:18:17.145 "num_base_bdevs_discovered": 1, 00:18:17.145 "num_base_bdevs_operational": 1, 00:18:17.145 "base_bdevs_list": [ 00:18:17.145 { 00:18:17.145 "name": null, 00:18:17.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.145 "is_configured": false, 00:18:17.145 "data_offset": 0, 00:18:17.145 "data_size": 7936 00:18:17.145 }, 00:18:17.145 { 00:18:17.145 "name": "BaseBdev2", 00:18:17.145 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:17.145 "is_configured": true, 00:18:17.145 "data_offset": 256, 00:18:17.145 "data_size": 7936 00:18:17.145 } 00:18:17.145 ] 00:18:17.145 }' 00:18:17.145 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.145 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.404 "name": "raid_bdev1", 00:18:17.404 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:17.404 "strip_size_kb": 0, 00:18:17.404 "state": "online", 00:18:17.404 "raid_level": "raid1", 00:18:17.404 "superblock": true, 00:18:17.404 "num_base_bdevs": 2, 00:18:17.404 "num_base_bdevs_discovered": 1, 00:18:17.404 "num_base_bdevs_operational": 1, 00:18:17.404 "base_bdevs_list": [ 00:18:17.404 { 00:18:17.404 "name": null, 00:18:17.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.404 "is_configured": false, 00:18:17.404 "data_offset": 0, 00:18:17.404 "data_size": 7936 00:18:17.404 }, 00:18:17.404 { 00:18:17.404 "name": "BaseBdev2", 00:18:17.404 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:17.404 "is_configured": true, 00:18:17.404 "data_offset": 256, 00:18:17.404 "data_size": 7936 00:18:17.404 } 00:18:17.404 ] 00:18:17.404 }' 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.404 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.404 [2024-12-13 08:29:29.733990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.404 [2024-12-13 08:29:29.734053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.405 [2024-12-13 08:29:29.734073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:17.405 [2024-12-13 08:29:29.734092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.405 [2024-12-13 08:29:29.734563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.405 [2024-12-13 08:29:29.734591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.405 [2024-12-13 08:29:29.734671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:17.405 [2024-12-13 08:29:29.734694] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.405 [2024-12-13 08:29:29.734706] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:17.405 [2024-12-13 08:29:29.734717] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:18:17.405 BaseBdev1 00:18:17.405 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.405 08:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:18.782 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.782 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.782 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.782 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.783 08:29:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.783 "name": "raid_bdev1", 00:18:18.783 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:18.783 "strip_size_kb": 0, 00:18:18.783 "state": "online", 00:18:18.783 "raid_level": "raid1", 00:18:18.783 "superblock": true, 00:18:18.783 "num_base_bdevs": 2, 00:18:18.783 "num_base_bdevs_discovered": 1, 00:18:18.783 "num_base_bdevs_operational": 1, 00:18:18.783 "base_bdevs_list": [ 00:18:18.783 { 00:18:18.783 "name": null, 00:18:18.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.783 "is_configured": false, 00:18:18.783 "data_offset": 0, 00:18:18.783 "data_size": 7936 00:18:18.783 }, 00:18:18.783 { 00:18:18.783 "name": "BaseBdev2", 00:18:18.783 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:18.783 "is_configured": true, 00:18:18.783 "data_offset": 256, 00:18:18.783 "data_size": 7936 00:18:18.783 } 00:18:18.783 ] 00:18:18.783 }' 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.783 08:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.041 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.042 08:29:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.042 "name": "raid_bdev1", 00:18:19.042 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:19.042 "strip_size_kb": 0, 00:18:19.042 "state": "online", 00:18:19.042 "raid_level": "raid1", 00:18:19.042 "superblock": true, 00:18:19.042 "num_base_bdevs": 2, 00:18:19.042 "num_base_bdevs_discovered": 1, 00:18:19.042 "num_base_bdevs_operational": 1, 00:18:19.042 "base_bdevs_list": [ 00:18:19.042 { 00:18:19.042 "name": null, 00:18:19.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.042 "is_configured": false, 00:18:19.042 "data_offset": 0, 00:18:19.042 "data_size": 7936 00:18:19.042 }, 00:18:19.042 { 00:18:19.042 "name": "BaseBdev2", 00:18:19.042 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:19.042 "is_configured": true, 00:18:19.042 "data_offset": 256, 00:18:19.042 "data_size": 7936 00:18:19.042 } 00:18:19.042 ] 00:18:19.042 }' 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:19.042 08:29:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.042 [2024-12-13 08:29:31.387288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.042 [2024-12-13 08:29:31.387480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.042 [2024-12-13 08:29:31.387506] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:19.042 request: 00:18:19.042 { 00:18:19.042 "base_bdev": "BaseBdev1", 00:18:19.042 "raid_bdev": "raid_bdev1", 00:18:19.042 "method": "bdev_raid_add_base_bdev", 00:18:19.042 "req_id": 1 00:18:19.042 } 00:18:19.042 Got JSON-RPC error response 00:18:19.042 response: 00:18:19.042 { 00:18:19.042 "code": -22, 00:18:19.042 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:19.042 } 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.042 08:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.419 08:29:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.419 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.419 "name": "raid_bdev1", 00:18:20.419 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:20.419 "strip_size_kb": 0, 00:18:20.420 "state": "online", 00:18:20.420 "raid_level": "raid1", 00:18:20.420 "superblock": true, 00:18:20.420 "num_base_bdevs": 2, 00:18:20.420 "num_base_bdevs_discovered": 1, 00:18:20.420 "num_base_bdevs_operational": 1, 00:18:20.420 "base_bdevs_list": [ 00:18:20.420 { 00:18:20.420 "name": null, 00:18:20.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.420 "is_configured": false, 00:18:20.420 "data_offset": 0, 00:18:20.420 "data_size": 7936 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "name": "BaseBdev2", 00:18:20.420 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:20.420 "is_configured": true, 00:18:20.420 "data_offset": 256, 00:18:20.420 "data_size": 7936 00:18:20.420 } 00:18:20.420 ] 00:18:20.420 }' 00:18:20.420 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.420 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.679 "name": "raid_bdev1", 00:18:20.679 "uuid": "9a4b04fd-d19a-4240-8ea0-9ea1c5287f1a", 00:18:20.679 "strip_size_kb": 0, 00:18:20.679 "state": "online", 00:18:20.679 "raid_level": "raid1", 00:18:20.679 "superblock": true, 00:18:20.679 "num_base_bdevs": 2, 00:18:20.679 "num_base_bdevs_discovered": 1, 00:18:20.679 "num_base_bdevs_operational": 1, 00:18:20.679 "base_bdevs_list": [ 00:18:20.679 { 00:18:20.679 "name": null, 00:18:20.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.679 "is_configured": false, 00:18:20.679 "data_offset": 0, 00:18:20.679 "data_size": 7936 00:18:20.679 }, 00:18:20.679 { 00:18:20.679 "name": "BaseBdev2", 00:18:20.679 "uuid": "b94984d7-4cb7-5c3b-8ad7-53b84a513dff", 00:18:20.679 "is_configured": true, 00:18:20.679 "data_offset": 256, 00:18:20.679 "data_size": 7936 00:18:20.679 } 00:18:20.679 ] 00:18:20.679 }' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86669 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86669 ']' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86669 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86669 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.679 killing process with pid 86669 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86669' 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86669 00:18:20.679 Received shutdown signal, test time was about 60.000000 seconds 00:18:20.679 00:18:20.679 Latency(us) 00:18:20.679 [2024-12-13T08:29:33.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.679 [2024-12-13T08:29:33.044Z] =================================================================================================================== 00:18:20.679 [2024-12-13T08:29:33.044Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.679 [2024-12-13 08:29:32.987030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.679 [2024-12-13 08:29:32.987172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.679 [2024-12-13 08:29:32.987229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.679 
[2024-12-13 08:29:32.987241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:20.679 08:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86669 00:18:20.938 [2024-12-13 08:29:33.284799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.317 08:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:22.317 00:18:22.317 real 0m19.641s 00:18:22.317 user 0m25.653s 00:18:22.317 sys 0m2.519s 00:18:22.317 08:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.317 08:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.317 ************************************ 00:18:22.317 END TEST raid_rebuild_test_sb_4k 00:18:22.317 ************************************ 00:18:22.317 08:29:34 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:22.317 08:29:34 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:22.317 08:29:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:22.317 08:29:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.317 08:29:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.317 ************************************ 00:18:22.317 START TEST raid_state_function_test_sb_md_separate 00:18:22.317 ************************************ 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:22.317 08:29:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:22.317 08:29:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87356 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:22.317 Process raid pid: 87356 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87356' 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87356 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87356 ']' 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.317 08:29:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.317 [2024-12-13 08:29:34.557443] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:22.317 [2024-12-13 08:29:34.557565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.576 [2024-12-13 08:29:34.712795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.576 [2024-12-13 08:29:34.830739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.835 [2024-12-13 08:29:35.022305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.835 [2024-12-13 08:29:35.022360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.094 [2024-12-13 08:29:35.411232] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.094 [2024-12-13 08:29:35.411301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:23.094 [2024-12-13 08:29:35.411311] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.094 [2024-12-13 08:29:35.411321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.094 08:29:35 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.094 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.353 "name": "Existed_Raid", 00:18:23.353 "uuid": "bedadf33-502c-4e3e-941e-6a44806df5a0", 00:18:23.353 "strip_size_kb": 0, 00:18:23.353 "state": "configuring", 00:18:23.353 "raid_level": "raid1", 00:18:23.353 "superblock": true, 00:18:23.353 "num_base_bdevs": 2, 00:18:23.353 "num_base_bdevs_discovered": 0, 00:18:23.353 "num_base_bdevs_operational": 2, 00:18:23.353 "base_bdevs_list": [ 00:18:23.353 { 00:18:23.353 "name": "BaseBdev1", 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.353 "is_configured": false, 00:18:23.353 "data_offset": 0, 00:18:23.353 "data_size": 0 00:18:23.353 }, 00:18:23.353 { 00:18:23.353 "name": "BaseBdev2", 00:18:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.353 "is_configured": false, 00:18:23.353 "data_offset": 0, 00:18:23.353 "data_size": 0 00:18:23.353 } 00:18:23.353 ] 00:18:23.353 }' 00:18:23.353 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.353 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 [2024-12-13 
08:29:35.850378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.613 [2024-12-13 08:29:35.850419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 [2024-12-13 08:29:35.862347] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.613 [2024-12-13 08:29:35.862399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.613 [2024-12-13 08:29:35.862407] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.613 [2024-12-13 08:29:35.862433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 [2024-12-13 08:29:35.909663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.613 BaseBdev1 
00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.613 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.613 [ 00:18:23.613 { 00:18:23.613 "name": "BaseBdev1", 00:18:23.613 "aliases": [ 00:18:23.613 "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b" 00:18:23.613 ], 00:18:23.613 "product_name": "Malloc disk", 00:18:23.613 
"block_size": 4096, 00:18:23.613 "num_blocks": 8192, 00:18:23.613 "uuid": "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b", 00:18:23.613 "md_size": 32, 00:18:23.613 "md_interleave": false, 00:18:23.613 "dif_type": 0, 00:18:23.613 "assigned_rate_limits": { 00:18:23.613 "rw_ios_per_sec": 0, 00:18:23.613 "rw_mbytes_per_sec": 0, 00:18:23.613 "r_mbytes_per_sec": 0, 00:18:23.613 "w_mbytes_per_sec": 0 00:18:23.613 }, 00:18:23.613 "claimed": true, 00:18:23.613 "claim_type": "exclusive_write", 00:18:23.613 "zoned": false, 00:18:23.613 "supported_io_types": { 00:18:23.613 "read": true, 00:18:23.613 "write": true, 00:18:23.613 "unmap": true, 00:18:23.613 "flush": true, 00:18:23.613 "reset": true, 00:18:23.613 "nvme_admin": false, 00:18:23.613 "nvme_io": false, 00:18:23.613 "nvme_io_md": false, 00:18:23.613 "write_zeroes": true, 00:18:23.613 "zcopy": true, 00:18:23.613 "get_zone_info": false, 00:18:23.613 "zone_management": false, 00:18:23.613 "zone_append": false, 00:18:23.613 "compare": false, 00:18:23.613 "compare_and_write": false, 00:18:23.613 "abort": true, 00:18:23.613 "seek_hole": false, 00:18:23.613 "seek_data": false, 00:18:23.613 "copy": true, 00:18:23.613 "nvme_iov_md": false 00:18:23.613 }, 00:18:23.613 "memory_domains": [ 00:18:23.613 { 00:18:23.613 "dma_device_id": "system", 00:18:23.613 "dma_device_type": 1 00:18:23.613 }, 00:18:23.614 { 00:18:23.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.614 "dma_device_type": 2 00:18:23.614 } 00:18:23.614 ], 00:18:23.614 "driver_specific": {} 00:18:23.614 } 00:18:23.614 ] 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:23.614 08:29:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.614 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.873 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.873 "name": "Existed_Raid", 00:18:23.873 "uuid": "5ea5ccaf-e42e-483e-9b80-08122edb3f71", 
00:18:23.873 "strip_size_kb": 0, 00:18:23.873 "state": "configuring", 00:18:23.873 "raid_level": "raid1", 00:18:23.873 "superblock": true, 00:18:23.873 "num_base_bdevs": 2, 00:18:23.873 "num_base_bdevs_discovered": 1, 00:18:23.873 "num_base_bdevs_operational": 2, 00:18:23.873 "base_bdevs_list": [ 00:18:23.873 { 00:18:23.873 "name": "BaseBdev1", 00:18:23.873 "uuid": "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b", 00:18:23.873 "is_configured": true, 00:18:23.873 "data_offset": 256, 00:18:23.873 "data_size": 7936 00:18:23.873 }, 00:18:23.873 { 00:18:23.873 "name": "BaseBdev2", 00:18:23.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.873 "is_configured": false, 00:18:23.873 "data_offset": 0, 00:18:23.873 "data_size": 0 00:18:23.873 } 00:18:23.873 ] 00:18:23.873 }' 00:18:23.873 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.873 08:29:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.133 [2024-12-13 08:29:36.392942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.133 [2024-12-13 08:29:36.393004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:24.133 08:29:36 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.133 [2024-12-13 08:29:36.404958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.133 [2024-12-13 08:29:36.406774] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.133 [2024-12-13 08:29:36.406818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.133 "name": "Existed_Raid", 00:18:24.133 "uuid": "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f", 00:18:24.133 "strip_size_kb": 0, 00:18:24.133 "state": "configuring", 00:18:24.133 "raid_level": "raid1", 00:18:24.133 "superblock": true, 00:18:24.133 "num_base_bdevs": 2, 00:18:24.133 "num_base_bdevs_discovered": 1, 00:18:24.133 "num_base_bdevs_operational": 2, 00:18:24.133 "base_bdevs_list": [ 00:18:24.133 { 00:18:24.133 "name": "BaseBdev1", 00:18:24.133 "uuid": "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b", 00:18:24.133 "is_configured": true, 00:18:24.133 "data_offset": 256, 00:18:24.133 "data_size": 7936 00:18:24.133 }, 00:18:24.133 { 00:18:24.133 "name": "BaseBdev2", 00:18:24.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.133 "is_configured": false, 00:18:24.133 "data_offset": 0, 00:18:24.133 "data_size": 0 00:18:24.133 } 00:18:24.133 ] 00:18:24.133 }' 00:18:24.133 08:29:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.133 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.702 [2024-12-13 08:29:36.923211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.702 [2024-12-13 08:29:36.923469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:24.702 [2024-12-13 08:29:36.923503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.702 [2024-12-13 08:29:36.923580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:24.702 [2024-12-13 08:29:36.923709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:24.702 [2024-12-13 08:29:36.923730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:24.702 [2024-12-13 08:29:36.923839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.702 BaseBdev2 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:24.702 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.703 [ 00:18:24.703 { 00:18:24.703 "name": "BaseBdev2", 00:18:24.703 "aliases": [ 00:18:24.703 "772ed65e-6354-44ab-8508-9fff17826bc3" 00:18:24.703 ], 00:18:24.703 "product_name": "Malloc disk", 00:18:24.703 "block_size": 4096, 00:18:24.703 "num_blocks": 8192, 00:18:24.703 "uuid": "772ed65e-6354-44ab-8508-9fff17826bc3", 00:18:24.703 "md_size": 32, 00:18:24.703 "md_interleave": false, 00:18:24.703 "dif_type": 0, 00:18:24.703 "assigned_rate_limits": { 00:18:24.703 "rw_ios_per_sec": 0, 00:18:24.703 "rw_mbytes_per_sec": 0, 00:18:24.703 "r_mbytes_per_sec": 0, 00:18:24.703 "w_mbytes_per_sec": 0 00:18:24.703 }, 00:18:24.703 "claimed": true, 00:18:24.703 "claim_type": 
"exclusive_write", 00:18:24.703 "zoned": false, 00:18:24.703 "supported_io_types": { 00:18:24.703 "read": true, 00:18:24.703 "write": true, 00:18:24.703 "unmap": true, 00:18:24.703 "flush": true, 00:18:24.703 "reset": true, 00:18:24.703 "nvme_admin": false, 00:18:24.703 "nvme_io": false, 00:18:24.703 "nvme_io_md": false, 00:18:24.703 "write_zeroes": true, 00:18:24.703 "zcopy": true, 00:18:24.703 "get_zone_info": false, 00:18:24.703 "zone_management": false, 00:18:24.703 "zone_append": false, 00:18:24.703 "compare": false, 00:18:24.703 "compare_and_write": false, 00:18:24.703 "abort": true, 00:18:24.703 "seek_hole": false, 00:18:24.703 "seek_data": false, 00:18:24.703 "copy": true, 00:18:24.703 "nvme_iov_md": false 00:18:24.703 }, 00:18:24.703 "memory_domains": [ 00:18:24.703 { 00:18:24.703 "dma_device_id": "system", 00:18:24.703 "dma_device_type": 1 00:18:24.703 }, 00:18:24.703 { 00:18:24.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.703 "dma_device_type": 2 00:18:24.703 } 00:18:24.703 ], 00:18:24.703 "driver_specific": {} 00:18:24.703 } 00:18:24.703 ] 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.703 
08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.703 08:29:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.703 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.703 "name": "Existed_Raid", 00:18:24.703 "uuid": "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f", 00:18:24.703 "strip_size_kb": 0, 00:18:24.703 "state": "online", 00:18:24.703 "raid_level": "raid1", 00:18:24.703 "superblock": true, 00:18:24.703 "num_base_bdevs": 2, 00:18:24.703 "num_base_bdevs_discovered": 2, 00:18:24.703 "num_base_bdevs_operational": 2, 00:18:24.703 
"base_bdevs_list": [ 00:18:24.703 { 00:18:24.703 "name": "BaseBdev1", 00:18:24.703 "uuid": "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b", 00:18:24.703 "is_configured": true, 00:18:24.703 "data_offset": 256, 00:18:24.703 "data_size": 7936 00:18:24.703 }, 00:18:24.703 { 00:18:24.703 "name": "BaseBdev2", 00:18:24.703 "uuid": "772ed65e-6354-44ab-8508-9fff17826bc3", 00:18:24.703 "is_configured": true, 00:18:24.703 "data_offset": 256, 00:18:24.703 "data_size": 7936 00:18:24.703 } 00:18:24.703 ] 00:18:24.703 }' 00:18:24.703 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.703 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:25.272 [2024-12-13 08:29:37.438754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.272 "name": "Existed_Raid", 00:18:25.272 "aliases": [ 00:18:25.272 "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f" 00:18:25.272 ], 00:18:25.272 "product_name": "Raid Volume", 00:18:25.272 "block_size": 4096, 00:18:25.272 "num_blocks": 7936, 00:18:25.272 "uuid": "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f", 00:18:25.272 "md_size": 32, 00:18:25.272 "md_interleave": false, 00:18:25.272 "dif_type": 0, 00:18:25.272 "assigned_rate_limits": { 00:18:25.272 "rw_ios_per_sec": 0, 00:18:25.272 "rw_mbytes_per_sec": 0, 00:18:25.272 "r_mbytes_per_sec": 0, 00:18:25.272 "w_mbytes_per_sec": 0 00:18:25.272 }, 00:18:25.272 "claimed": false, 00:18:25.272 "zoned": false, 00:18:25.272 "supported_io_types": { 00:18:25.272 "read": true, 00:18:25.272 "write": true, 00:18:25.272 "unmap": false, 00:18:25.272 "flush": false, 00:18:25.272 "reset": true, 00:18:25.272 "nvme_admin": false, 00:18:25.272 "nvme_io": false, 00:18:25.272 "nvme_io_md": false, 00:18:25.272 "write_zeroes": true, 00:18:25.272 "zcopy": false, 00:18:25.272 "get_zone_info": false, 00:18:25.272 "zone_management": false, 00:18:25.272 "zone_append": false, 00:18:25.272 "compare": false, 00:18:25.272 "compare_and_write": false, 00:18:25.272 "abort": false, 00:18:25.272 "seek_hole": false, 00:18:25.272 "seek_data": false, 00:18:25.272 "copy": false, 00:18:25.272 "nvme_iov_md": false 00:18:25.272 }, 00:18:25.272 "memory_domains": [ 00:18:25.272 { 00:18:25.272 "dma_device_id": "system", 00:18:25.272 "dma_device_type": 1 00:18:25.272 }, 00:18:25.272 { 00:18:25.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.272 "dma_device_type": 2 00:18:25.272 }, 00:18:25.272 { 
00:18:25.272 "dma_device_id": "system", 00:18:25.272 "dma_device_type": 1 00:18:25.272 }, 00:18:25.272 { 00:18:25.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.272 "dma_device_type": 2 00:18:25.272 } 00:18:25.272 ], 00:18:25.272 "driver_specific": { 00:18:25.272 "raid": { 00:18:25.272 "uuid": "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f", 00:18:25.272 "strip_size_kb": 0, 00:18:25.272 "state": "online", 00:18:25.272 "raid_level": "raid1", 00:18:25.272 "superblock": true, 00:18:25.272 "num_base_bdevs": 2, 00:18:25.272 "num_base_bdevs_discovered": 2, 00:18:25.272 "num_base_bdevs_operational": 2, 00:18:25.272 "base_bdevs_list": [ 00:18:25.272 { 00:18:25.272 "name": "BaseBdev1", 00:18:25.272 "uuid": "3bd2a31e-dd9f-43f3-8b7e-f36ab51ede0b", 00:18:25.272 "is_configured": true, 00:18:25.272 "data_offset": 256, 00:18:25.272 "data_size": 7936 00:18:25.272 }, 00:18:25.272 { 00:18:25.272 "name": "BaseBdev2", 00:18:25.272 "uuid": "772ed65e-6354-44ab-8508-9fff17826bc3", 00:18:25.272 "is_configured": true, 00:18:25.272 "data_offset": 256, 00:18:25.272 "data_size": 7936 00:18:25.272 } 00:18:25.272 ] 00:18:25.272 } 00:18:25.272 } 00:18:25.272 }' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:25.272 BaseBdev2' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:25.272 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.273 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 [2024-12-13 08:29:37.662076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.532 "name": "Existed_Raid", 00:18:25.532 "uuid": "e39f46ef-d3d0-47a1-95b6-bc7402dfbe0f", 00:18:25.532 "strip_size_kb": 0, 00:18:25.532 "state": "online", 00:18:25.532 "raid_level": "raid1", 00:18:25.532 "superblock": true, 00:18:25.532 "num_base_bdevs": 2, 00:18:25.532 "num_base_bdevs_discovered": 1, 00:18:25.532 "num_base_bdevs_operational": 1, 00:18:25.532 "base_bdevs_list": [ 00:18:25.532 { 00:18:25.532 "name": null, 00:18:25.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.532 "is_configured": false, 00:18:25.532 "data_offset": 0, 00:18:25.532 "data_size": 7936 00:18:25.532 }, 00:18:25.532 { 00:18:25.532 "name": "BaseBdev2", 00:18:25.532 "uuid": 
"772ed65e-6354-44ab-8508-9fff17826bc3", 00:18:25.532 "is_configured": true, 00:18:25.532 "data_offset": 256, 00:18:25.532 "data_size": 7936 00:18:25.532 } 00:18:25.532 ] 00:18:25.532 }' 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.532 08:29:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.101 [2024-12-13 08:29:38.234074] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:26.101 [2024-12-13 08:29:38.234197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.101 [2024-12-13 08:29:38.338252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.101 [2024-12-13 08:29:38.338306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.101 [2024-12-13 08:29:38.338328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:26.101 08:29:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87356 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87356 ']' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87356 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87356 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.101 killing process with pid 87356 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87356' 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87356 00:18:26.101 [2024-12-13 08:29:38.429792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.101 08:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87356 00:18:26.101 [2024-12-13 08:29:38.446362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.480 08:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:27.480 00:18:27.480 real 0m5.118s 00:18:27.480 user 0m7.391s 00:18:27.480 sys 0m0.860s 00:18:27.480 08:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.480 
08:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.480 ************************************ 00:18:27.480 END TEST raid_state_function_test_sb_md_separate 00:18:27.480 ************************************ 00:18:27.480 08:29:39 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:27.480 08:29:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:27.480 08:29:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.480 08:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.480 ************************************ 00:18:27.480 START TEST raid_superblock_test_md_separate 00:18:27.480 ************************************ 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87608 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87608 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87608 ']' 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.480 08:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.480 [2024-12-13 08:29:39.718025] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:27.480 [2024-12-13 08:29:39.718167] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87608 ] 00:18:27.738 [2024-12-13 08:29:39.873336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.738 [2024-12-13 08:29:39.984969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.997 [2024-12-13 08:29:40.179038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.997 [2024-12-13 08:29:40.179113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:28.256 08:29:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 malloc1 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 [2024-12-13 08:29:40.599563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.256 [2024-12-13 08:29:40.599627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.256 [2024-12-13 08:29:40.599649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.256 [2024-12-13 08:29:40.599658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.256 [2024-12-13 08:29:40.601567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.256 [2024-12-13 08:29:40.601606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:28.256 pt1 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.256 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.257 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.516 malloc2 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.516 08:29:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.516 [2024-12-13 08:29:40.655357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.516 [2024-12-13 08:29:40.655418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.516 [2024-12-13 08:29:40.655439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.516 [2024-12-13 08:29:40.655448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.516 [2024-12-13 08:29:40.657433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.516 [2024-12-13 08:29:40.657470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.516 pt2 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.516 [2024-12-13 08:29:40.667376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.516 [2024-12-13 08:29:40.669324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.516 [2024-12-13 08:29:40.669504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.516 [2024-12-13 08:29:40.669523] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:28.516 [2024-12-13 08:29:40.669596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:28.516 [2024-12-13 08:29:40.669726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.516 [2024-12-13 08:29:40.669744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.516 [2024-12-13 08:29:40.669867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.516 08:29:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.516 "name": "raid_bdev1", 00:18:28.516 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7", 00:18:28.516 "strip_size_kb": 0, 00:18:28.516 "state": "online", 00:18:28.516 "raid_level": "raid1", 00:18:28.516 "superblock": true, 00:18:28.516 "num_base_bdevs": 2, 00:18:28.516 "num_base_bdevs_discovered": 2, 00:18:28.516 "num_base_bdevs_operational": 2, 00:18:28.516 "base_bdevs_list": [ 00:18:28.516 { 00:18:28.516 "name": "pt1", 00:18:28.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.516 "is_configured": true, 00:18:28.516 "data_offset": 256, 00:18:28.516 "data_size": 7936 00:18:28.516 }, 00:18:28.516 { 00:18:28.516 "name": "pt2", 00:18:28.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.516 "is_configured": true, 00:18:28.516 "data_offset": 256, 00:18:28.516 "data_size": 7936 00:18:28.516 } 00:18:28.516 ] 00:18:28.516 }' 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.516 08:29:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:28.775 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.776 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:28.776 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.776 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.776 [2024-12-13 08:29:41.110938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.776 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.034 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.034 "name": "raid_bdev1", 00:18:29.034 "aliases": [ 00:18:29.034 "82b3bcde-60ec-4d5e-8073-62357d7927d7" 00:18:29.034 ], 00:18:29.034 "product_name": "Raid Volume", 00:18:29.034 "block_size": 4096, 00:18:29.034 "num_blocks": 7936, 00:18:29.034 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7", 00:18:29.034 "md_size": 32, 00:18:29.034 "md_interleave": false, 00:18:29.034 "dif_type": 0, 00:18:29.034 "assigned_rate_limits": { 00:18:29.034 "rw_ios_per_sec": 0, 00:18:29.034 "rw_mbytes_per_sec": 0, 00:18:29.034 "r_mbytes_per_sec": 0, 00:18:29.034 "w_mbytes_per_sec": 0 00:18:29.034 }, 00:18:29.034 "claimed": false, 00:18:29.034 "zoned": false, 
00:18:29.034 "supported_io_types": { 00:18:29.034 "read": true, 00:18:29.034 "write": true, 00:18:29.034 "unmap": false, 00:18:29.034 "flush": false, 00:18:29.034 "reset": true, 00:18:29.034 "nvme_admin": false, 00:18:29.034 "nvme_io": false, 00:18:29.034 "nvme_io_md": false, 00:18:29.034 "write_zeroes": true, 00:18:29.034 "zcopy": false, 00:18:29.034 "get_zone_info": false, 00:18:29.034 "zone_management": false, 00:18:29.034 "zone_append": false, 00:18:29.034 "compare": false, 00:18:29.034 "compare_and_write": false, 00:18:29.034 "abort": false, 00:18:29.034 "seek_hole": false, 00:18:29.034 "seek_data": false, 00:18:29.034 "copy": false, 00:18:29.034 "nvme_iov_md": false 00:18:29.035 }, 00:18:29.035 "memory_domains": [ 00:18:29.035 { 00:18:29.035 "dma_device_id": "system", 00:18:29.035 "dma_device_type": 1 00:18:29.035 }, 00:18:29.035 { 00:18:29.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.035 "dma_device_type": 2 00:18:29.035 }, 00:18:29.035 { 00:18:29.035 "dma_device_id": "system", 00:18:29.035 "dma_device_type": 1 00:18:29.035 }, 00:18:29.035 { 00:18:29.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.035 "dma_device_type": 2 00:18:29.035 } 00:18:29.035 ], 00:18:29.035 "driver_specific": { 00:18:29.035 "raid": { 00:18:29.035 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7", 00:18:29.035 "strip_size_kb": 0, 00:18:29.035 "state": "online", 00:18:29.035 "raid_level": "raid1", 00:18:29.035 "superblock": true, 00:18:29.035 "num_base_bdevs": 2, 00:18:29.035 "num_base_bdevs_discovered": 2, 00:18:29.035 "num_base_bdevs_operational": 2, 00:18:29.035 "base_bdevs_list": [ 00:18:29.035 { 00:18:29.035 "name": "pt1", 00:18:29.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.035 "is_configured": true, 00:18:29.035 "data_offset": 256, 00:18:29.035 "data_size": 7936 00:18:29.035 }, 00:18:29.035 { 00:18:29.035 "name": "pt2", 00:18:29.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.035 "is_configured": true, 00:18:29.035 "data_offset": 256, 
00:18:29.035 "data_size": 7936 00:18:29.035 } 00:18:29.035 ] 00:18:29.035 } 00:18:29.035 } 00:18:29.035 }' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:29.035 pt2' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:29.035 [2024-12-13 08:29:41.358421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.035 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82b3bcde-60ec-4d5e-8073-62357d7927d7 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 82b3bcde-60ec-4d5e-8073-62357d7927d7 ']' 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.294 [2024-12-13 08:29:41.406047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.294 [2024-12-13 08:29:41.406077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.294 [2024-12-13 08:29:41.406188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.294 [2024-12-13 08:29:41.406247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.294 [2024-12-13 08:29:41.406260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.294 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:29.295 08:29:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 [2024-12-13 08:29:41.533842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:29.295 [2024-12-13 08:29:41.535795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:29.295 [2024-12-13 08:29:41.535921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:29.295 [2024-12-13 08:29:41.536019] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:29.295 [2024-12-13 08:29:41.536089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.295 [2024-12-13 08:29:41.536131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:29.295 request: 00:18:29.295 { 00:18:29.295 "name": 
"raid_bdev1", 00:18:29.295 "raid_level": "raid1", 00:18:29.295 "base_bdevs": [ 00:18:29.295 "malloc1", 00:18:29.295 "malloc2" 00:18:29.295 ], 00:18:29.295 "superblock": false, 00:18:29.295 "method": "bdev_raid_create", 00:18:29.295 "req_id": 1 00:18:29.295 } 00:18:29.295 Got JSON-RPC error response 00:18:29.295 response: 00:18:29.295 { 00:18:29.295 "code": -17, 00:18:29.295 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:29.295 } 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.295 [2024-12-13 08:29:41.601702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:29.295 [2024-12-13 08:29:41.601793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:29.295 [2024-12-13 08:29:41.601834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:29.295 [2024-12-13 08:29:41.601866] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:29.295 [2024-12-13 08:29:41.603796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:29.295 [2024-12-13 08:29:41.603889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:29.295 [2024-12-13 08:29:41.603960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:29.295 [2024-12-13 08:29:41.604053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:29.295 pt1
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.295 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:29.295 "name": "raid_bdev1",
00:18:29.295 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:29.295 "strip_size_kb": 0,
00:18:29.295 "state": "configuring",
00:18:29.295 "raid_level": "raid1",
00:18:29.295 "superblock": true,
00:18:29.295 "num_base_bdevs": 2,
00:18:29.295 "num_base_bdevs_discovered": 1,
00:18:29.295 "num_base_bdevs_operational": 2,
00:18:29.295 "base_bdevs_list": [
00:18:29.295 {
00:18:29.295 "name": "pt1",
00:18:29.295 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:29.295 "is_configured": true,
00:18:29.295 "data_offset": 256,
00:18:29.295 "data_size": 7936
00:18:29.295 },
00:18:29.296 {
00:18:29.296 "name": null,
00:18:29.296 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:29.296 "is_configured": false,
00:18:29.296 "data_offset": 256,
00:18:29.296 "data_size": 7936
00:18:29.296 }
00:18:29.296 ]
00:18:29.296 }'
00:18:29.296 08:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:29.296 08:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.864 [2024-12-13 08:29:42.036975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:29.864 [2024-12-13 08:29:42.037059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:29.864 [2024-12-13 08:29:42.037081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:29.864 [2024-12-13 08:29:42.037092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:29.864 [2024-12-13 08:29:42.037363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:29.864 [2024-12-13 08:29:42.037387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:29.864 [2024-12-13 08:29:42.037444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:29.864 [2024-12-13 08:29:42.037469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:29.864 [2024-12-13 08:29:42.037589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:18:29.864 [2024-12-13 08:29:42.037601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:29.864 [2024-12-13 08:29:42.037680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:29.864 [2024-12-13 08:29:42.037815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:18:29.864 [2024-12-13 08:29:42.037824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:18:29.864 [2024-12-13 08:29:42.037925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:29.864 pt2
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.864 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:29.864 "name": "raid_bdev1",
00:18:29.864 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:29.864 "strip_size_kb": 0,
00:18:29.865 "state": "online",
00:18:29.865 "raid_level": "raid1",
00:18:29.865 "superblock": true,
00:18:29.865 "num_base_bdevs": 2,
00:18:29.865 "num_base_bdevs_discovered": 2,
00:18:29.865 "num_base_bdevs_operational": 2,
00:18:29.865 "base_bdevs_list": [
00:18:29.865 {
00:18:29.865 "name": "pt1",
00:18:29.865 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:29.865 "is_configured": true,
00:18:29.865 "data_offset": 256,
00:18:29.865 "data_size": 7936
00:18:29.865 },
00:18:29.865 {
00:18:29.865 "name": "pt2",
00:18:29.865 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:29.865 "is_configured": true,
00:18:29.865 "data_offset": 256,
00:18:29.865 "data_size": 7936
00:18:29.865 }
00:18:29.865 ]
00:18:29.865 }'
00:18:29.865 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:29.865 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:30.433 [2024-12-13 08:29:42.528386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.433 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:30.433 "name": "raid_bdev1",
00:18:30.433 "aliases": [
00:18:30.433 "82b3bcde-60ec-4d5e-8073-62357d7927d7"
00:18:30.433 ],
00:18:30.433 "product_name": "Raid Volume",
00:18:30.433 "block_size": 4096,
00:18:30.433 "num_blocks": 7936,
00:18:30.433 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:30.433 "md_size": 32,
00:18:30.433 "md_interleave": false,
00:18:30.433 "dif_type": 0,
00:18:30.433 "assigned_rate_limits": {
00:18:30.433 "rw_ios_per_sec": 0,
00:18:30.433 "rw_mbytes_per_sec": 0,
00:18:30.433 "r_mbytes_per_sec": 0,
00:18:30.433 "w_mbytes_per_sec": 0
00:18:30.433 },
00:18:30.433 "claimed": false,
00:18:30.433 "zoned": false,
00:18:30.433 "supported_io_types": {
00:18:30.433 "read": true,
00:18:30.433 "write": true,
00:18:30.433 "unmap": false,
00:18:30.433 "flush": false,
00:18:30.433 "reset": true,
00:18:30.433 "nvme_admin": false,
00:18:30.433 "nvme_io": false,
00:18:30.433 "nvme_io_md": false,
00:18:30.433 "write_zeroes": true,
00:18:30.433 "zcopy": false,
00:18:30.433 "get_zone_info": false,
00:18:30.433 "zone_management": false,
00:18:30.433 "zone_append": false,
00:18:30.433 "compare": false,
00:18:30.433 "compare_and_write": false,
00:18:30.433 "abort": false,
00:18:30.433 "seek_hole": false,
00:18:30.433 "seek_data": false,
00:18:30.433 "copy": false,
00:18:30.433 "nvme_iov_md": false
00:18:30.433 },
00:18:30.433 "memory_domains": [
00:18:30.433 {
00:18:30.433 "dma_device_id": "system",
00:18:30.433 "dma_device_type": 1
00:18:30.433 },
00:18:30.433 {
00:18:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.433 "dma_device_type": 2
00:18:30.433 },
00:18:30.433 {
00:18:30.433 "dma_device_id": "system",
00:18:30.433 "dma_device_type": 1
00:18:30.433 },
00:18:30.433 {
00:18:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.433 "dma_device_type": 2
00:18:30.433 }
00:18:30.433 ],
00:18:30.433 "driver_specific": {
00:18:30.433 "raid": {
00:18:30.433 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:30.433 "strip_size_kb": 0,
00:18:30.433 "state": "online",
00:18:30.433 "raid_level": "raid1",
00:18:30.433 "superblock": true,
00:18:30.433 "num_base_bdevs": 2,
00:18:30.433 "num_base_bdevs_discovered": 2,
00:18:30.433 "num_base_bdevs_operational": 2,
00:18:30.433 "base_bdevs_list": [
00:18:30.433 {
00:18:30.433 "name": "pt1",
00:18:30.433 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:30.433 "is_configured": true,
00:18:30.433 "data_offset": 256,
00:18:30.433 "data_size": 7936
00:18:30.433 },
00:18:30.433 {
00:18:30.433 "name": "pt2",
00:18:30.433 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:30.433 "is_configured": true,
00:18:30.433 "data_offset": 256,
00:18:30.434 "data_size": 7936
00:18:30.434 }
00:18:30.434 ]
00:18:30.434 }
00:18:30.434 }
00:18:30.434 }'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:30.434 pt2'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.434 [2024-12-13 08:29:42.740028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 82b3bcde-60ec-4d5e-8073-62357d7927d7 '!=' 82b3bcde-60ec-4d5e-8073-62357d7927d7 ']'
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.434 [2024-12-13 08:29:42.783728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.434 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.693 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:30.693 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.693 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:30.693 "name": "raid_bdev1",
00:18:30.693 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:30.693 "strip_size_kb": 0,
00:18:30.693 "state": "online",
00:18:30.693 "raid_level": "raid1",
00:18:30.693 "superblock": true,
00:18:30.693 "num_base_bdevs": 2,
00:18:30.693 "num_base_bdevs_discovered": 1,
00:18:30.693 "num_base_bdevs_operational": 1,
00:18:30.693 "base_bdevs_list": [
00:18:30.693 {
00:18:30.693 "name": null,
00:18:30.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:30.693 "is_configured": false,
00:18:30.693 "data_offset": 0,
00:18:30.693 "data_size": 7936
00:18:30.693 },
00:18:30.693 {
00:18:30.693 "name": "pt2",
00:18:30.693 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:30.693 "is_configured": true,
00:18:30.693 "data_offset": 256,
00:18:30.693 "data_size": 7936
00:18:30.693 }
00:18:30.693 ]
00:18:30.693 }'
00:18:30.693 08:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:30.693 08:29:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.952 [2024-12-13 08:29:43.238945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:30.952 [2024-12-13 08:29:43.239027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:30.952 [2024-12-13 08:29:43.239132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:30.952 [2024-12-13 08:29:43.239216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:30.952 [2024-12-13 08:29:43.239274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.952 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:30.953 [2024-12-13 08:29:43.298822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:30.953 [2024-12-13 08:29:43.298879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-13 08:29:43.298894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:18:30.953 [2024-12-13 08:29:43.298905] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:30.953 [2024-12-13 08:29:43.300955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:30.953 [2024-12-13 08:29:43.301031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:30.953 [2024-12-13 08:29:43.301110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:30.953 [2024-12-13 08:29:43.301192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:30.953 [2024-12-13 08:29:43.301326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:18:30.953 [2024-12-13 08:29:43.301366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:30.953 [2024-12-13 08:29:43.301481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:18:30.953 [2024-12-13 08:29:43.301636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:18:30.953 [2024-12-13 08:29:43.301672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:18:30.953 [2024-12-13 08:29:43.301808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:30.953 pt2
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.953 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.210 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.210 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:31.210 "name": "raid_bdev1",
00:18:31.210 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:31.210 "strip_size_kb": 0,
00:18:31.210 "state": "online",
00:18:31.210 "raid_level": "raid1",
00:18:31.210 "superblock": true,
00:18:31.210 "num_base_bdevs": 2,
00:18:31.210 "num_base_bdevs_discovered": 1,
00:18:31.210 "num_base_bdevs_operational": 1,
00:18:31.210 "base_bdevs_list": [
00:18:31.210 {
00:18:31.210 "name": null,
00:18:31.210 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:31.211 "is_configured": false,
00:18:31.211 "data_offset": 256,
00:18:31.211 "data_size": 7936
00:18:31.211 },
00:18:31.211 {
00:18:31.211 "name": "pt2",
00:18:31.211 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:31.211 "is_configured": true,
00:18:31.211 "data_offset": 256,
00:18:31.211 "data_size": 7936
00:18:31.211 }
00:18:31.211 ]
00:18:31.211 }'
00:18:31.211 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:31.211 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.469 [2024-12-13 08:29:43.750065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:31.469 [2024-12-13 08:29:43.750171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:31.469 [2024-12-13 08:29:43.750292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:31.469 [2024-12-13 08:29:43.750362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:31.469 [2024-12-13 08:29:43.750417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.469 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.469 [2024-12-13 08:29:43.790020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:31.469 [2024-12-13 08:29:43.790126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:31.469 [2024-12-13 08:29:43.790187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:18:31.469 [2024-12-13 08:29:43.790224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:31.469 [2024-12-13 08:29:43.792396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:31.469 [2024-12-13 08:29:43.792471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:31.469 [2024-12-13 08:29:43.792552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:31.469 [2024-12-13 08:29:43.792645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:31.469 [2024-12-13 08:29:43.792830] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:18:31.469 [2024-12-13 08:29:43.792892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:31.469 [2024-12-13 08:29:43.792917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:18:31.469 [2024-12-13 08:29:43.793010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:31.469 [2024-12-13 08:29:43.793097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:18:31.470 [2024-12-13 08:29:43.793126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:31.470 [2024-12-13 08:29:43.793196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:31.470 [2024-12-13 08:29:43.793313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:18:31.470 [2024-12-13 08:29:43.793323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:18:31.470 [2024-12-13 08:29:43.793427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:31.470 pt1
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.470 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.728 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:31.728 "name": "raid_bdev1",
00:18:31.728 "uuid": "82b3bcde-60ec-4d5e-8073-62357d7927d7",
00:18:31.728 "strip_size_kb": 0,
00:18:31.728 "state": "online",
00:18:31.728 "raid_level": "raid1",
00:18:31.728 "superblock": true,
00:18:31.728 "num_base_bdevs": 2,
00:18:31.728 "num_base_bdevs_discovered": 1,
00:18:31.728 "num_base_bdevs_operational": 1,
00:18:31.728 "base_bdevs_list": [
00:18:31.728 {
00:18:31.728 "name": null,
00:18:31.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:31.728 "is_configured": false,
00:18:31.728 "data_offset": 256,
00:18:31.728 "data_size": 7936
00:18:31.728 },
00:18:31.728 {
00:18:31.728 "name": "pt2",
00:18:31.728 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:31.728 "is_configured": true,
00:18:31.728 "data_offset": 256,
00:18:31.728 "data_size": 7936
00:18:31.728 }
00:18:31.728 ]
00:18:31.728 }'
00:18:31.728 08:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:31.728 08:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:31.987 [2024-12-13 08:29:44.317386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:31.987 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 82b3bcde-60ec-4d5e-8073-62357d7927d7 '!=' 82b3bcde-60ec-4d5e-8073-62357d7927d7 ']'
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87608
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87608 ']'
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87608
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87608
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87608'
00:18:32.247 killing process with pid 87608
00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87608
00:18:32.247 [2024-12-13 08:29:44.383711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:32.247 [2024-12-13 08:29:44.383857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:32.247 [2024-12-13 08:29:44.383912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev
base bdevs is 0, going to free all in destruct 00:18:32.247 08:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87608 00:18:32.247 [2024-12-13 08:29:44.383929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:32.247 [2024-12-13 08:29:44.598091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.711 08:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:33.711 00:18:33.711 real 0m6.077s 00:18:33.711 user 0m9.254s 00:18:33.711 sys 0m1.060s 00:18:33.711 08:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.711 08:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.711 ************************************ 00:18:33.711 END TEST raid_superblock_test_md_separate 00:18:33.711 ************************************ 00:18:33.711 08:29:45 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:33.711 08:29:45 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:33.711 08:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:33.711 08:29:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.711 08:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.711 ************************************ 00:18:33.711 START TEST raid_rebuild_test_sb_md_separate 00:18:33.711 ************************************ 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:33.711 
08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87932 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87932 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87932 ']' 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.711 08:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.711 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:33.711 Zero copy mechanism will not be used. 00:18:33.711 [2024-12-13 08:29:45.886445] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:33.711 [2024-12-13 08:29:45.886584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87932 ] 00:18:33.994 [2024-12-13 08:29:46.060953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.994 [2024-12-13 08:29:46.177002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.252 [2024-12-13 08:29:46.376192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.252 [2024-12-13 08:29:46.376223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.511 BaseBdev1_malloc 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:34.511 08:29:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.511 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.512 [2024-12-13 08:29:46.762969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:34.512 [2024-12-13 08:29:46.763028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.512 [2024-12-13 08:29:46.763049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.512 [2024-12-13 08:29:46.763059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.512 [2024-12-13 08:29:46.764909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.512 [2024-12-13 08:29:46.764950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.512 BaseBdev1 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.512 BaseBdev2_malloc 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.512 [2024-12-13 08:29:46.817077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:34.512 [2024-12-13 08:29:46.817165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.512 [2024-12-13 08:29:46.817186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:34.512 [2024-12-13 08:29:46.817198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.512 [2024-12-13 08:29:46.819042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.512 [2024-12-13 08:29:46.819179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.512 BaseBdev2 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.512 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.771 spare_malloc 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.771 spare_delay 00:18:34.771 08:29:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.771 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.771 [2024-12-13 08:29:46.893764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.771 [2024-12-13 08:29:46.893841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.772 [2024-12-13 08:29:46.893863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:34.772 [2024-12-13 08:29:46.893874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.772 [2024-12-13 08:29:46.895907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.772 [2024-12-13 08:29:46.895994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.772 spare 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.772 [2024-12-13 08:29:46.905791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.772 [2024-12-13 08:29:46.907639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:34.772 [2024-12-13 08:29:46.907830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:34.772 [2024-12-13 08:29:46.907847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.772 [2024-12-13 08:29:46.907918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:34.772 [2024-12-13 08:29:46.908047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:34.772 [2024-12-13 08:29:46.908057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:34.772 [2024-12-13 08:29:46.908172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.772 "name": "raid_bdev1", 00:18:34.772 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:34.772 "strip_size_kb": 0, 00:18:34.772 "state": "online", 00:18:34.772 "raid_level": "raid1", 00:18:34.772 "superblock": true, 00:18:34.772 "num_base_bdevs": 2, 00:18:34.772 "num_base_bdevs_discovered": 2, 00:18:34.772 "num_base_bdevs_operational": 2, 00:18:34.772 "base_bdevs_list": [ 00:18:34.772 { 00:18:34.772 "name": "BaseBdev1", 00:18:34.772 "uuid": "581df3bf-0d99-5031-9254-83cf920b4e1f", 00:18:34.772 "is_configured": true, 00:18:34.772 "data_offset": 256, 00:18:34.772 "data_size": 7936 00:18:34.772 }, 00:18:34.772 { 00:18:34.772 "name": "BaseBdev2", 00:18:34.772 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:34.772 "is_configured": true, 00:18:34.772 "data_offset": 256, 00:18:34.772 "data_size": 7936 00:18:34.772 } 00:18:34.772 ] 00:18:34.772 }' 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.772 08:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.031 08:29:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.031 [2024-12-13 08:29:47.321397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:35.031 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.032 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:35.291 [2024-12-13 08:29:47.564750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.291 /dev/nbd0 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:35.291 
08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:35.291 1+0 records in 00:18:35.291 1+0 records out 00:18:35.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519394 s, 7.9 MB/s 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:35.291 08:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:36.229 7936+0 records in 00:18:36.229 7936+0 records out 00:18:36.229 32505856 bytes (33 MB, 31 MiB) copied, 0.617949 s, 52.6 MB/s 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:36.229 [2024-12-13 08:29:48.461241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:36.229 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 [2024-12-13 08:29:48.473332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.230 "name": "raid_bdev1", 00:18:36.230 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:36.230 "strip_size_kb": 0, 00:18:36.230 "state": "online", 00:18:36.230 "raid_level": "raid1", 00:18:36.230 "superblock": true, 00:18:36.230 "num_base_bdevs": 2, 00:18:36.230 "num_base_bdevs_discovered": 1, 00:18:36.230 "num_base_bdevs_operational": 1, 00:18:36.230 "base_bdevs_list": [ 00:18:36.230 { 00:18:36.230 "name": null, 00:18:36.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.230 "is_configured": false, 00:18:36.230 "data_offset": 0, 00:18:36.230 "data_size": 7936 00:18:36.230 }, 00:18:36.230 { 00:18:36.230 "name": "BaseBdev2", 00:18:36.230 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:36.230 "is_configured": true, 00:18:36.230 "data_offset": 256, 00:18:36.230 "data_size": 7936 00:18:36.230 } 00:18:36.230 ] 00:18:36.230 }' 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.230 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.801 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.801 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:36.801 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.801 [2024-12-13 08:29:48.920592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.801 [2024-12-13 08:29:48.935616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:36.801 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.801 08:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:36.801 [2024-12-13 08:29:48.937395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.740 08:29:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.740 "name": "raid_bdev1", 00:18:37.740 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:37.740 "strip_size_kb": 0, 00:18:37.740 "state": "online", 00:18:37.740 "raid_level": "raid1", 00:18:37.740 "superblock": true, 00:18:37.740 "num_base_bdevs": 2, 00:18:37.740 "num_base_bdevs_discovered": 2, 00:18:37.740 "num_base_bdevs_operational": 2, 00:18:37.740 "process": { 00:18:37.740 "type": "rebuild", 00:18:37.740 "target": "spare", 00:18:37.740 "progress": { 00:18:37.740 "blocks": 2560, 00:18:37.740 "percent": 32 00:18:37.740 } 00:18:37.740 }, 00:18:37.740 "base_bdevs_list": [ 00:18:37.740 { 00:18:37.740 "name": "spare", 00:18:37.740 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:37.740 "is_configured": true, 00:18:37.740 "data_offset": 256, 00:18:37.740 "data_size": 7936 00:18:37.740 }, 00:18:37.740 { 00:18:37.740 "name": "BaseBdev2", 00:18:37.740 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:37.740 "is_configured": true, 00:18:37.740 "data_offset": 256, 00:18:37.740 "data_size": 7936 00:18:37.740 } 00:18:37.740 ] 00:18:37.740 }' 00:18:37.740 08:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.740 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.740 [2024-12-13 08:29:50.101049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.999 [2024-12-13 08:29:50.142782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.999 [2024-12-13 08:29:50.142844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.999 [2024-12-13 08:29:50.142859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.999 [2024-12-13 08:29:50.142869] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.999 "name": "raid_bdev1", 00:18:37.999 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:37.999 "strip_size_kb": 0, 00:18:37.999 "state": "online", 00:18:37.999 "raid_level": "raid1", 00:18:37.999 "superblock": true, 00:18:37.999 "num_base_bdevs": 2, 00:18:37.999 "num_base_bdevs_discovered": 1, 00:18:37.999 "num_base_bdevs_operational": 1, 00:18:37.999 "base_bdevs_list": [ 00:18:37.999 { 00:18:37.999 "name": null, 00:18:37.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.999 "is_configured": false, 00:18:37.999 "data_offset": 0, 00:18:37.999 "data_size": 7936 00:18:37.999 }, 00:18:37.999 { 00:18:37.999 "name": "BaseBdev2", 00:18:37.999 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:37.999 "is_configured": true, 00:18:37.999 "data_offset": 256, 00:18:37.999 "data_size": 7936 00:18:37.999 } 00:18:37.999 ] 00:18:37.999 }' 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.999 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.568 08:29:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.568 "name": "raid_bdev1", 00:18:38.568 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:38.568 "strip_size_kb": 0, 00:18:38.568 "state": "online", 00:18:38.568 "raid_level": "raid1", 00:18:38.568 "superblock": true, 00:18:38.568 "num_base_bdevs": 2, 00:18:38.568 "num_base_bdevs_discovered": 1, 00:18:38.568 "num_base_bdevs_operational": 1, 00:18:38.568 "base_bdevs_list": [ 00:18:38.568 { 00:18:38.568 "name": null, 00:18:38.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.568 "is_configured": false, 00:18:38.568 "data_offset": 0, 00:18:38.568 "data_size": 7936 00:18:38.568 }, 00:18:38.568 { 00:18:38.568 "name": "BaseBdev2", 00:18:38.568 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:38.568 "is_configured": true, 00:18:38.568 "data_offset": 256, 00:18:38.568 "data_size": 7936 
00:18:38.568 } 00:18:38.568 ] 00:18:38.568 }' 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.568 [2024-12-13 08:29:50.777548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.568 [2024-12-13 08:29:50.791093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.568 08:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:38.568 [2024-12-13 08:29:50.792975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.506 "name": "raid_bdev1", 00:18:39.506 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:39.506 "strip_size_kb": 0, 00:18:39.506 "state": "online", 00:18:39.506 "raid_level": "raid1", 00:18:39.506 "superblock": true, 00:18:39.506 "num_base_bdevs": 2, 00:18:39.506 "num_base_bdevs_discovered": 2, 00:18:39.506 "num_base_bdevs_operational": 2, 00:18:39.506 "process": { 00:18:39.506 "type": "rebuild", 00:18:39.506 "target": "spare", 00:18:39.506 "progress": { 00:18:39.506 "blocks": 2560, 00:18:39.506 "percent": 32 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 "base_bdevs_list": [ 00:18:39.506 { 00:18:39.506 "name": "spare", 00:18:39.506 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:39.506 "is_configured": true, 00:18:39.506 "data_offset": 256, 00:18:39.506 "data_size": 7936 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "name": "BaseBdev2", 00:18:39.506 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:39.506 "is_configured": true, 00:18:39.506 "data_offset": 256, 00:18:39.506 "data_size": 7936 00:18:39.506 } 00:18:39.506 ] 00:18:39.506 }' 00:18:39.506 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:39.765 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=712 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.765 
08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.765 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.765 "name": "raid_bdev1", 00:18:39.765 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:39.765 "strip_size_kb": 0, 00:18:39.766 "state": "online", 00:18:39.766 "raid_level": "raid1", 00:18:39.766 "superblock": true, 00:18:39.766 "num_base_bdevs": 2, 00:18:39.766 "num_base_bdevs_discovered": 2, 00:18:39.766 "num_base_bdevs_operational": 2, 00:18:39.766 "process": { 00:18:39.766 "type": "rebuild", 00:18:39.766 "target": "spare", 00:18:39.766 "progress": { 00:18:39.766 "blocks": 2816, 00:18:39.766 "percent": 35 00:18:39.766 } 00:18:39.766 }, 00:18:39.766 "base_bdevs_list": [ 00:18:39.766 { 00:18:39.766 "name": "spare", 00:18:39.766 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:39.766 "is_configured": true, 00:18:39.766 "data_offset": 256, 00:18:39.766 "data_size": 7936 00:18:39.766 }, 00:18:39.766 { 00:18:39.766 "name": "BaseBdev2", 00:18:39.766 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:39.766 "is_configured": true, 00:18:39.766 "data_offset": 256, 00:18:39.766 "data_size": 7936 00:18:39.766 } 00:18:39.766 ] 00:18:39.766 }' 00:18:39.766 08:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.766 08:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.766 08:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.766 08:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.766 08:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.144 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.144 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.144 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.144 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.144 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.145 "name": "raid_bdev1", 00:18:41.145 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:41.145 "strip_size_kb": 0, 00:18:41.145 
"state": "online", 00:18:41.145 "raid_level": "raid1", 00:18:41.145 "superblock": true, 00:18:41.145 "num_base_bdevs": 2, 00:18:41.145 "num_base_bdevs_discovered": 2, 00:18:41.145 "num_base_bdevs_operational": 2, 00:18:41.145 "process": { 00:18:41.145 "type": "rebuild", 00:18:41.145 "target": "spare", 00:18:41.145 "progress": { 00:18:41.145 "blocks": 5888, 00:18:41.145 "percent": 74 00:18:41.145 } 00:18:41.145 }, 00:18:41.145 "base_bdevs_list": [ 00:18:41.145 { 00:18:41.145 "name": "spare", 00:18:41.145 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:41.145 "is_configured": true, 00:18:41.145 "data_offset": 256, 00:18:41.145 "data_size": 7936 00:18:41.145 }, 00:18:41.145 { 00:18:41.145 "name": "BaseBdev2", 00:18:41.145 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:41.145 "is_configured": true, 00:18:41.145 "data_offset": 256, 00:18:41.145 "data_size": 7936 00:18:41.145 } 00:18:41.145 ] 00:18:41.145 }' 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.145 08:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.713 [2024-12-13 08:29:53.906609] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:41.714 [2024-12-13 08:29:53.906689] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:41.714 [2024-12-13 08:29:53.906804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.973 "name": "raid_bdev1", 00:18:41.973 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:41.973 "strip_size_kb": 0, 00:18:41.973 "state": "online", 00:18:41.973 "raid_level": "raid1", 00:18:41.973 "superblock": true, 00:18:41.973 "num_base_bdevs": 2, 00:18:41.973 "num_base_bdevs_discovered": 2, 00:18:41.973 "num_base_bdevs_operational": 2, 00:18:41.973 "base_bdevs_list": [ 00:18:41.973 { 00:18:41.973 "name": "spare", 00:18:41.973 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:41.973 "is_configured": true, 00:18:41.973 "data_offset": 256, 00:18:41.973 "data_size": 7936 
00:18:41.973 }, 00:18:41.973 { 00:18:41.973 "name": "BaseBdev2", 00:18:41.973 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:41.973 "is_configured": true, 00:18:41.973 "data_offset": 256, 00:18:41.973 "data_size": 7936 00:18:41.973 } 00:18:41.973 ] 00:18:41.973 }' 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:41.973 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.232 
08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.232 "name": "raid_bdev1", 00:18:42.232 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:42.232 "strip_size_kb": 0, 00:18:42.232 "state": "online", 00:18:42.232 "raid_level": "raid1", 00:18:42.232 "superblock": true, 00:18:42.232 "num_base_bdevs": 2, 00:18:42.232 "num_base_bdevs_discovered": 2, 00:18:42.232 "num_base_bdevs_operational": 2, 00:18:42.232 "base_bdevs_list": [ 00:18:42.232 { 00:18:42.232 "name": "spare", 00:18:42.232 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:42.232 "is_configured": true, 00:18:42.232 "data_offset": 256, 00:18:42.232 "data_size": 7936 00:18:42.232 }, 00:18:42.232 { 00:18:42.232 "name": "BaseBdev2", 00:18:42.232 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:42.232 "is_configured": true, 00:18:42.232 "data_offset": 256, 00:18:42.232 "data_size": 7936 00:18:42.232 } 00:18:42.232 ] 00:18:42.232 }' 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.232 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.233 08:29:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.233 "name": "raid_bdev1", 00:18:42.233 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:42.233 "strip_size_kb": 0, 00:18:42.233 "state": "online", 00:18:42.233 "raid_level": "raid1", 00:18:42.233 "superblock": true, 00:18:42.233 "num_base_bdevs": 2, 00:18:42.233 "num_base_bdevs_discovered": 2, 00:18:42.233 "num_base_bdevs_operational": 2, 00:18:42.233 "base_bdevs_list": [ 00:18:42.233 { 00:18:42.233 "name": "spare", 00:18:42.233 "uuid": 
"b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:42.233 "is_configured": true, 00:18:42.233 "data_offset": 256, 00:18:42.233 "data_size": 7936 00:18:42.233 }, 00:18:42.233 { 00:18:42.233 "name": "BaseBdev2", 00:18:42.233 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:42.233 "is_configured": true, 00:18:42.233 "data_offset": 256, 00:18:42.233 "data_size": 7936 00:18:42.233 } 00:18:42.233 ] 00:18:42.233 }' 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.233 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.802 [2024-12-13 08:29:54.956816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.802 [2024-12-13 08:29:54.956908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.802 [2024-12-13 08:29:54.957038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.802 [2024-12-13 08:29:54.957143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.802 [2024-12-13 08:29:54.957199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.802 08:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.802 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:43.061 /dev/nbd0 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:43.061 1+0 records in 00:18:43.061 1+0 records out 00:18:43.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361582 s, 11.3 MB/s 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.061 08:29:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.061 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:43.319 /dev/nbd1 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.319 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:43.319 1+0 records in 00:18:43.319 1+0 records out 00:18:43.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404071 s, 10.1 MB/s 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.320 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.578 08:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:43.837 
08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.837 [2024-12-13 08:29:56.118521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.837 [2024-12-13 08:29:56.118623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.837 [2024-12-13 08:29:56.118652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:43.837 [2024-12-13 08:29:56.118661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.837 [2024-12-13 08:29:56.120741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.837 [2024-12-13 08:29:56.120782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.837 [2024-12-13 08:29:56.120855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:43.837 [2024-12-13 08:29:56.120917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.837 [2024-12-13 08:29:56.121078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.837 spare 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.837 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.096 [2024-12-13 08:29:56.221010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:44.096 [2024-12-13 08:29:56.221173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:44.096 [2024-12-13 08:29:56.221320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:44.096 [2024-12-13 08:29:56.221514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:44.096 [2024-12-13 08:29:56.221524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:44.096 [2024-12-13 08:29:56.221673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.096 "name": "raid_bdev1", 00:18:44.096 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:44.096 "strip_size_kb": 0, 00:18:44.096 "state": "online", 00:18:44.096 "raid_level": "raid1", 00:18:44.096 "superblock": true, 00:18:44.096 "num_base_bdevs": 2, 00:18:44.096 "num_base_bdevs_discovered": 2, 00:18:44.096 "num_base_bdevs_operational": 2, 00:18:44.096 "base_bdevs_list": [ 
00:18:44.096 { 00:18:44.096 "name": "spare", 00:18:44.096 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:44.096 "is_configured": true, 00:18:44.096 "data_offset": 256, 00:18:44.096 "data_size": 7936 00:18:44.096 }, 00:18:44.096 { 00:18:44.096 "name": "BaseBdev2", 00:18:44.096 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:44.096 "is_configured": true, 00:18:44.096 "data_offset": 256, 00:18:44.096 "data_size": 7936 00:18:44.096 } 00:18:44.096 ] 00:18:44.096 }' 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.096 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.355 "name": "raid_bdev1", 00:18:44.355 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:44.355 "strip_size_kb": 0, 00:18:44.355 "state": "online", 00:18:44.355 "raid_level": "raid1", 00:18:44.355 "superblock": true, 00:18:44.355 "num_base_bdevs": 2, 00:18:44.355 "num_base_bdevs_discovered": 2, 00:18:44.355 "num_base_bdevs_operational": 2, 00:18:44.355 "base_bdevs_list": [ 00:18:44.355 { 00:18:44.355 "name": "spare", 00:18:44.355 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:44.355 "is_configured": true, 00:18:44.355 "data_offset": 256, 00:18:44.355 "data_size": 7936 00:18:44.355 }, 00:18:44.355 { 00:18:44.355 "name": "BaseBdev2", 00:18:44.355 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:44.355 "is_configured": true, 00:18:44.355 "data_offset": 256, 00:18:44.355 "data_size": 7936 00:18:44.355 } 00:18:44.355 ] 00:18:44.355 }' 00:18:44.355 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.614 [2024-12-13 08:29:56.853311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.614 "name": "raid_bdev1", 00:18:44.614 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:44.614 "strip_size_kb": 0, 00:18:44.614 "state": "online", 00:18:44.614 "raid_level": "raid1", 00:18:44.614 "superblock": true, 00:18:44.614 "num_base_bdevs": 2, 00:18:44.614 "num_base_bdevs_discovered": 1, 00:18:44.614 "num_base_bdevs_operational": 1, 00:18:44.614 "base_bdevs_list": [ 00:18:44.614 { 00:18:44.614 "name": null, 00:18:44.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.614 "is_configured": false, 00:18:44.614 "data_offset": 0, 00:18:44.614 "data_size": 7936 00:18:44.614 }, 00:18:44.614 { 00:18:44.614 "name": "BaseBdev2", 00:18:44.614 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:44.614 "is_configured": true, 00:18:44.614 "data_offset": 256, 00:18:44.614 "data_size": 7936 00:18:44.614 } 00:18:44.614 ] 00:18:44.614 }' 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.614 08:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.181 08:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.181 08:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:45.181 08:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.181 [2024-12-13 08:29:57.320551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.181 [2024-12-13 08:29:57.320811] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.181 [2024-12-13 08:29:57.320874] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:45.181 [2024-12-13 08:29:57.320985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.181 [2024-12-13 08:29:57.334665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:45.181 08:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.181 08:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:45.181 [2024-12-13 08:29:57.336502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.132 "name": "raid_bdev1", 00:18:46.132 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:46.132 "strip_size_kb": 0, 00:18:46.132 "state": "online", 00:18:46.132 "raid_level": "raid1", 00:18:46.132 "superblock": true, 00:18:46.132 "num_base_bdevs": 2, 00:18:46.132 "num_base_bdevs_discovered": 2, 00:18:46.132 "num_base_bdevs_operational": 2, 00:18:46.132 "process": { 00:18:46.132 "type": "rebuild", 00:18:46.132 "target": "spare", 00:18:46.132 "progress": { 00:18:46.132 "blocks": 2560, 00:18:46.132 "percent": 32 00:18:46.132 } 00:18:46.132 }, 00:18:46.132 "base_bdevs_list": [ 00:18:46.132 { 00:18:46.132 "name": "spare", 00:18:46.132 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:46.132 "is_configured": true, 00:18:46.132 "data_offset": 256, 00:18:46.132 "data_size": 7936 00:18:46.132 }, 00:18:46.132 { 00:18:46.132 "name": "BaseBdev2", 00:18:46.132 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:46.132 "is_configured": true, 00:18:46.132 "data_offset": 256, 00:18:46.132 "data_size": 7936 00:18:46.132 } 00:18:46.132 ] 00:18:46.132 }' 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.132 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.391 08:29:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.391 [2024-12-13 08:29:58.504710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.391 [2024-12-13 08:29:58.541689] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:46.391 [2024-12-13 08:29:58.541766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.391 [2024-12-13 08:29:58.541780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.391 [2024-12-13 08:29:58.541799] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.391 08:29:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.391 "name": "raid_bdev1", 00:18:46.391 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:46.391 "strip_size_kb": 0, 00:18:46.391 "state": "online", 00:18:46.391 "raid_level": "raid1", 00:18:46.391 "superblock": true, 00:18:46.391 "num_base_bdevs": 2, 00:18:46.391 "num_base_bdevs_discovered": 1, 00:18:46.391 "num_base_bdevs_operational": 1, 00:18:46.391 "base_bdevs_list": [ 00:18:46.391 { 00:18:46.391 "name": null, 00:18:46.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.391 "is_configured": false, 00:18:46.391 "data_offset": 0, 00:18:46.391 "data_size": 7936 00:18:46.391 }, 00:18:46.391 { 00:18:46.391 "name": "BaseBdev2", 00:18:46.391 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:46.391 "is_configured": true, 00:18:46.391 "data_offset": 256, 00:18:46.391 "data_size": 7936 00:18:46.391 } 
00:18:46.391 ] 00:18:46.391 }' 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.391 08:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.649 08:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.649 08:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.649 08:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.649 [2024-12-13 08:29:59.012815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.649 [2024-12-13 08:29:59.012952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.649 [2024-12-13 08:29:59.012999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:46.649 [2024-12-13 08:29:59.013032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.649 [2024-12-13 08:29:59.013327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.649 [2024-12-13 08:29:59.013387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.649 [2024-12-13 08:29:59.013482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.650 [2024-12-13 08:29:59.013525] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:46.650 [2024-12-13 08:29:59.013592] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:46.650 [2024-12-13 08:29:59.013656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.908 [2024-12-13 08:29:59.027483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:46.908 spare 00:18:46.908 08:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.908 [2024-12-13 08:29:59.029372] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.908 08:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.843 "name": 
"raid_bdev1", 00:18:47.843 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:47.843 "strip_size_kb": 0, 00:18:47.843 "state": "online", 00:18:47.843 "raid_level": "raid1", 00:18:47.843 "superblock": true, 00:18:47.843 "num_base_bdevs": 2, 00:18:47.843 "num_base_bdevs_discovered": 2, 00:18:47.843 "num_base_bdevs_operational": 2, 00:18:47.843 "process": { 00:18:47.843 "type": "rebuild", 00:18:47.843 "target": "spare", 00:18:47.843 "progress": { 00:18:47.843 "blocks": 2560, 00:18:47.843 "percent": 32 00:18:47.843 } 00:18:47.843 }, 00:18:47.843 "base_bdevs_list": [ 00:18:47.843 { 00:18:47.843 "name": "spare", 00:18:47.843 "uuid": "b7ead580-63a1-5fc3-be9d-5c76dec7610e", 00:18:47.843 "is_configured": true, 00:18:47.843 "data_offset": 256, 00:18:47.843 "data_size": 7936 00:18:47.843 }, 00:18:47.843 { 00:18:47.843 "name": "BaseBdev2", 00:18:47.843 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:47.843 "is_configured": true, 00:18:47.843 "data_offset": 256, 00:18:47.843 "data_size": 7936 00:18:47.843 } 00:18:47.843 ] 00:18:47.843 }' 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.843 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.843 [2024-12-13 08:30:00.165695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:48.102 [2024-12-13 08:30:00.234748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.102 [2024-12-13 08:30:00.234826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.102 [2024-12-13 08:30:00.234844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.102 [2024-12-13 08:30:00.234850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.102 "name": "raid_bdev1", 00:18:48.102 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:48.102 "strip_size_kb": 0, 00:18:48.102 "state": "online", 00:18:48.102 "raid_level": "raid1", 00:18:48.102 "superblock": true, 00:18:48.102 "num_base_bdevs": 2, 00:18:48.102 "num_base_bdevs_discovered": 1, 00:18:48.102 "num_base_bdevs_operational": 1, 00:18:48.102 "base_bdevs_list": [ 00:18:48.102 { 00:18:48.102 "name": null, 00:18:48.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.102 "is_configured": false, 00:18:48.102 "data_offset": 0, 00:18:48.102 "data_size": 7936 00:18:48.102 }, 00:18:48.102 { 00:18:48.102 "name": "BaseBdev2", 00:18:48.102 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:48.102 "is_configured": true, 00:18:48.102 "data_offset": 256, 00:18:48.102 "data_size": 7936 00:18:48.102 } 00:18:48.102 ] 00:18:48.102 }' 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.102 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.360 08:30:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.360 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.619 "name": "raid_bdev1", 00:18:48.619 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:48.619 "strip_size_kb": 0, 00:18:48.619 "state": "online", 00:18:48.619 "raid_level": "raid1", 00:18:48.619 "superblock": true, 00:18:48.619 "num_base_bdevs": 2, 00:18:48.619 "num_base_bdevs_discovered": 1, 00:18:48.619 "num_base_bdevs_operational": 1, 00:18:48.619 "base_bdevs_list": [ 00:18:48.619 { 00:18:48.619 "name": null, 00:18:48.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.619 "is_configured": false, 00:18:48.619 "data_offset": 0, 00:18:48.619 "data_size": 7936 00:18:48.619 }, 00:18:48.619 { 00:18:48.619 "name": "BaseBdev2", 00:18:48.619 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:48.619 "is_configured": true, 00:18:48.619 "data_offset": 256, 00:18:48.619 "data_size": 7936 00:18:48.619 } 00:18:48.619 ] 00:18:48.619 }' 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.619 [2024-12-13 08:30:00.838472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:48.619 [2024-12-13 08:30:00.838532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.619 [2024-12-13 08:30:00.838554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:48.619 [2024-12-13 08:30:00.838563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.619 [2024-12-13 08:30:00.838787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.619 [2024-12-13 08:30:00.838798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:48.619 [2024-12-13 08:30:00.838850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:48.619 [2024-12-13 08:30:00.838863] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.619 [2024-12-13 08:30:00.838875] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:48.619 [2024-12-13 08:30:00.838889] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:48.619 BaseBdev1 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.619 08:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.554 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.555 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.555 "name": "raid_bdev1", 00:18:49.555 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:49.555 "strip_size_kb": 0, 00:18:49.555 "state": "online", 00:18:49.555 "raid_level": "raid1", 00:18:49.555 "superblock": true, 00:18:49.555 "num_base_bdevs": 2, 00:18:49.555 "num_base_bdevs_discovered": 1, 00:18:49.555 "num_base_bdevs_operational": 1, 00:18:49.555 "base_bdevs_list": [ 00:18:49.555 { 00:18:49.555 "name": null, 00:18:49.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.555 "is_configured": false, 00:18:49.555 "data_offset": 0, 00:18:49.555 "data_size": 7936 00:18:49.555 }, 00:18:49.555 { 00:18:49.555 "name": "BaseBdev2", 00:18:49.555 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:49.555 "is_configured": true, 00:18:49.555 "data_offset": 256, 00:18:49.555 "data_size": 7936 00:18:49.555 } 00:18:49.555 ] 00:18:49.555 }' 00:18:49.555 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.555 08:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.122 "name": "raid_bdev1", 00:18:50.122 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:50.122 "strip_size_kb": 0, 00:18:50.122 "state": "online", 00:18:50.122 "raid_level": "raid1", 00:18:50.122 "superblock": true, 00:18:50.122 "num_base_bdevs": 2, 00:18:50.122 "num_base_bdevs_discovered": 1, 00:18:50.122 "num_base_bdevs_operational": 1, 00:18:50.122 "base_bdevs_list": [ 00:18:50.122 { 00:18:50.122 "name": null, 00:18:50.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.122 "is_configured": false, 00:18:50.122 "data_offset": 0, 00:18:50.122 "data_size": 7936 00:18:50.122 }, 00:18:50.122 { 00:18:50.122 "name": "BaseBdev2", 00:18:50.122 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:50.122 "is_configured": 
true, 00:18:50.122 "data_offset": 256, 00:18:50.122 "data_size": 7936 00:18:50.122 } 00:18:50.122 ] 00:18:50.122 }' 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.122 [2024-12-13 08:30:02.475723] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.122 [2024-12-13 08:30:02.475896] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.122 [2024-12-13 08:30:02.475915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.122 request: 00:18:50.122 { 00:18:50.122 "base_bdev": "BaseBdev1", 00:18:50.122 "raid_bdev": "raid_bdev1", 00:18:50.122 "method": "bdev_raid_add_base_bdev", 00:18:50.122 "req_id": 1 00:18:50.122 } 00:18:50.122 Got JSON-RPC error response 00:18:50.122 response: 00:18:50.122 { 00:18:50.122 "code": -22, 00:18:50.122 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:50.122 } 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:50.122 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.123 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.123 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.123 08:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.499 "name": "raid_bdev1", 00:18:51.499 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:51.499 "strip_size_kb": 0, 00:18:51.499 "state": "online", 00:18:51.499 "raid_level": "raid1", 00:18:51.499 "superblock": true, 00:18:51.499 "num_base_bdevs": 2, 00:18:51.499 "num_base_bdevs_discovered": 1, 00:18:51.499 "num_base_bdevs_operational": 1, 00:18:51.499 "base_bdevs_list": [ 00:18:51.499 { 00:18:51.499 "name": null, 00:18:51.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.499 "is_configured": false, 00:18:51.499 
"data_offset": 0, 00:18:51.499 "data_size": 7936 00:18:51.499 }, 00:18:51.499 { 00:18:51.499 "name": "BaseBdev2", 00:18:51.499 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:51.499 "is_configured": true, 00:18:51.499 "data_offset": 256, 00:18:51.499 "data_size": 7936 00:18:51.499 } 00:18:51.499 ] 00:18:51.499 }' 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.499 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.758 "name": "raid_bdev1", 00:18:51.758 "uuid": "8bbad696-e577-4d92-9844-145c2e76c660", 00:18:51.758 
"strip_size_kb": 0, 00:18:51.758 "state": "online", 00:18:51.758 "raid_level": "raid1", 00:18:51.758 "superblock": true, 00:18:51.758 "num_base_bdevs": 2, 00:18:51.758 "num_base_bdevs_discovered": 1, 00:18:51.758 "num_base_bdevs_operational": 1, 00:18:51.758 "base_bdevs_list": [ 00:18:51.758 { 00:18:51.758 "name": null, 00:18:51.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.758 "is_configured": false, 00:18:51.758 "data_offset": 0, 00:18:51.758 "data_size": 7936 00:18:51.758 }, 00:18:51.758 { 00:18:51.758 "name": "BaseBdev2", 00:18:51.758 "uuid": "b702a4fb-4b45-5cf7-9bc1-d6ca71945134", 00:18:51.758 "is_configured": true, 00:18:51.758 "data_offset": 256, 00:18:51.758 "data_size": 7936 00:18:51.758 } 00:18:51.758 ] 00:18:51.758 }' 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.758 08:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87932 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87932 ']' 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87932 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87932 00:18:51.758 08:30:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.758 killing process with pid 87932 00:18:51.758 Received shutdown signal, test time was about 60.000000 seconds 00:18:51.758 00:18:51.758 Latency(us) 00:18:51.758 [2024-12-13T08:30:04.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.758 [2024-12-13T08:30:04.123Z] =================================================================================================================== 00:18:51.758 [2024-12-13T08:30:04.123Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87932' 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87932 00:18:51.758 [2024-12-13 08:30:04.066840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.758 [2024-12-13 08:30:04.066961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.758 08:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87932 00:18:51.758 [2024-12-13 08:30:04.067011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.758 [2024-12-13 08:30:04.067023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:52.326 [2024-12-13 08:30:04.385095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.260 08:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:53.261 00:18:53.261 real 0m19.711s 00:18:53.261 user 0m25.789s 00:18:53.261 sys 0m2.565s 00:18:53.261 08:30:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.261 ************************************ 00:18:53.261 END TEST raid_rebuild_test_sb_md_separate 00:18:53.261 ************************************ 00:18:53.261 08:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.261 08:30:05 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:53.261 08:30:05 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:53.261 08:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:53.261 08:30:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.261 08:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.261 ************************************ 00:18:53.261 START TEST raid_state_function_test_sb_md_interleaved 00:18:53.261 ************************************ 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.261 08:30:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88624 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88624' 00:18:53.261 Process raid pid: 88624 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88624 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88624 ']' 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.261 08:30:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.518 [2024-12-13 08:30:05.663971] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:18:53.518 [2024-12-13 08:30:05.664190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.518 [2024-12-13 08:30:05.837965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.775 [2024-12-13 08:30:05.957268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.033 [2024-12-13 08:30:06.158755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.033 [2024-12-13 08:30:06.158795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.292 [2024-12-13 08:30:06.503018] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.292 [2024-12-13 08:30:06.503140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.292 [2024-12-13 08:30:06.503155] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.292 [2024-12-13 08:30:06.503181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.292 08:30:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.292 08:30:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.292 "name": "Existed_Raid", 00:18:54.292 "uuid": "399db9aa-37db-4c44-a08f-ef92086d0f34", 00:18:54.292 "strip_size_kb": 0, 00:18:54.292 "state": "configuring", 00:18:54.292 "raid_level": "raid1", 00:18:54.292 "superblock": true, 00:18:54.292 "num_base_bdevs": 2, 00:18:54.292 "num_base_bdevs_discovered": 0, 00:18:54.292 "num_base_bdevs_operational": 2, 00:18:54.292 "base_bdevs_list": [ 00:18:54.292 { 00:18:54.292 "name": "BaseBdev1", 00:18:54.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.292 "is_configured": false, 00:18:54.292 "data_offset": 0, 00:18:54.292 "data_size": 0 00:18:54.292 }, 00:18:54.292 { 00:18:54.292 "name": "BaseBdev2", 00:18:54.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.292 "is_configured": false, 00:18:54.292 "data_offset": 0, 00:18:54.292 "data_size": 0 00:18:54.292 } 00:18:54.292 ] 00:18:54.292 }' 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.292 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 [2024-12-13 08:30:06.982184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.860 [2024-12-13 08:30:06.982268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 [2024-12-13 08:30:06.990134] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.860 [2024-12-13 08:30:06.990229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.860 [2024-12-13 08:30:06.990257] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.860 [2024-12-13 08:30:06.990283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 08:30:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 [2024-12-13 08:30:07.038295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.860 BaseBdev1 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 [ 00:18:54.860 { 00:18:54.860 "name": "BaseBdev1", 00:18:54.860 "aliases": [ 00:18:54.860 "98efed9c-dc86-4ba3-991e-cb5b54b88a84" 00:18:54.860 ], 00:18:54.860 "product_name": "Malloc disk", 00:18:54.860 "block_size": 4128, 00:18:54.860 "num_blocks": 8192, 00:18:54.860 "uuid": "98efed9c-dc86-4ba3-991e-cb5b54b88a84", 00:18:54.860 "md_size": 32, 00:18:54.860 
"md_interleave": true, 00:18:54.860 "dif_type": 0, 00:18:54.860 "assigned_rate_limits": { 00:18:54.860 "rw_ios_per_sec": 0, 00:18:54.860 "rw_mbytes_per_sec": 0, 00:18:54.860 "r_mbytes_per_sec": 0, 00:18:54.860 "w_mbytes_per_sec": 0 00:18:54.860 }, 00:18:54.860 "claimed": true, 00:18:54.860 "claim_type": "exclusive_write", 00:18:54.860 "zoned": false, 00:18:54.860 "supported_io_types": { 00:18:54.860 "read": true, 00:18:54.860 "write": true, 00:18:54.860 "unmap": true, 00:18:54.860 "flush": true, 00:18:54.860 "reset": true, 00:18:54.860 "nvme_admin": false, 00:18:54.860 "nvme_io": false, 00:18:54.860 "nvme_io_md": false, 00:18:54.860 "write_zeroes": true, 00:18:54.860 "zcopy": true, 00:18:54.860 "get_zone_info": false, 00:18:54.860 "zone_management": false, 00:18:54.860 "zone_append": false, 00:18:54.860 "compare": false, 00:18:54.860 "compare_and_write": false, 00:18:54.860 "abort": true, 00:18:54.860 "seek_hole": false, 00:18:54.860 "seek_data": false, 00:18:54.860 "copy": true, 00:18:54.860 "nvme_iov_md": false 00:18:54.860 }, 00:18:54.860 "memory_domains": [ 00:18:54.860 { 00:18:54.860 "dma_device_id": "system", 00:18:54.860 "dma_device_type": 1 00:18:54.860 }, 00:18:54.860 { 00:18:54.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.860 "dma_device_type": 2 00:18:54.860 } 00:18:54.860 ], 00:18:54.860 "driver_specific": {} 00:18:54.860 } 00:18:54.860 ] 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.861 08:30:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.861 "name": "Existed_Raid", 00:18:54.861 "uuid": "e2d56689-1504-4edc-8a05-a1022cf38949", 00:18:54.861 "strip_size_kb": 0, 00:18:54.861 "state": "configuring", 00:18:54.861 "raid_level": "raid1", 
00:18:54.861 "superblock": true, 00:18:54.861 "num_base_bdevs": 2, 00:18:54.861 "num_base_bdevs_discovered": 1, 00:18:54.861 "num_base_bdevs_operational": 2, 00:18:54.861 "base_bdevs_list": [ 00:18:54.861 { 00:18:54.861 "name": "BaseBdev1", 00:18:54.861 "uuid": "98efed9c-dc86-4ba3-991e-cb5b54b88a84", 00:18:54.861 "is_configured": true, 00:18:54.861 "data_offset": 256, 00:18:54.861 "data_size": 7936 00:18:54.861 }, 00:18:54.861 { 00:18:54.861 "name": "BaseBdev2", 00:18:54.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.861 "is_configured": false, 00:18:54.861 "data_offset": 0, 00:18:54.861 "data_size": 0 00:18:54.861 } 00:18:54.861 ] 00:18:54.861 }' 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.861 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 [2024-12-13 08:30:07.509574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.428 [2024-12-13 08:30:07.509629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 [2024-12-13 08:30:07.521579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.428 [2024-12-13 08:30:07.523406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.428 [2024-12-13 08:30:07.523451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.428 
08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.428 "name": "Existed_Raid", 00:18:55.428 "uuid": "2b628c2e-13ca-4015-b502-f5b81685004d", 00:18:55.428 "strip_size_kb": 0, 00:18:55.428 "state": "configuring", 00:18:55.428 "raid_level": "raid1", 00:18:55.428 "superblock": true, 00:18:55.428 "num_base_bdevs": 2, 00:18:55.428 "num_base_bdevs_discovered": 1, 00:18:55.428 "num_base_bdevs_operational": 2, 00:18:55.428 "base_bdevs_list": [ 00:18:55.428 { 00:18:55.428 "name": "BaseBdev1", 00:18:55.428 "uuid": "98efed9c-dc86-4ba3-991e-cb5b54b88a84", 00:18:55.428 "is_configured": true, 00:18:55.428 "data_offset": 256, 00:18:55.428 "data_size": 7936 00:18:55.428 }, 00:18:55.428 { 00:18:55.428 "name": "BaseBdev2", 00:18:55.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.428 "is_configured": false, 00:18:55.428 "data_offset": 0, 00:18:55.428 "data_size": 0 00:18:55.428 } 00:18:55.428 ] 00:18:55.428 }' 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:55.428 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.687 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:55.687 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.687 08:30:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.687 [2024-12-13 08:30:08.023148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.687 [2024-12-13 08:30:08.023551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:55.687 [2024-12-13 08:30:08.023611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:55.687 [2024-12-13 08:30:08.023734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:55.687 [2024-12-13 08:30:08.023858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:55.687 [2024-12-13 08:30:08.023901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:55.687 [2024-12-13 08:30:08.024035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.687 BaseBdev2 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.687 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.946 [ 00:18:55.946 { 00:18:55.946 "name": "BaseBdev2", 00:18:55.946 "aliases": [ 00:18:55.946 "4fc25d41-7e16-4250-94cc-bb573f8ebf4b" 00:18:55.946 ], 00:18:55.946 "product_name": "Malloc disk", 00:18:55.946 "block_size": 4128, 00:18:55.946 "num_blocks": 8192, 00:18:55.946 "uuid": "4fc25d41-7e16-4250-94cc-bb573f8ebf4b", 00:18:55.946 "md_size": 32, 00:18:55.946 "md_interleave": true, 00:18:55.946 "dif_type": 0, 00:18:55.946 "assigned_rate_limits": { 00:18:55.946 "rw_ios_per_sec": 0, 00:18:55.946 "rw_mbytes_per_sec": 0, 00:18:55.946 "r_mbytes_per_sec": 0, 00:18:55.946 "w_mbytes_per_sec": 0 00:18:55.946 }, 00:18:55.946 "claimed": true, 00:18:55.946 "claim_type": "exclusive_write", 
00:18:55.946 "zoned": false, 00:18:55.946 "supported_io_types": { 00:18:55.946 "read": true, 00:18:55.946 "write": true, 00:18:55.946 "unmap": true, 00:18:55.946 "flush": true, 00:18:55.946 "reset": true, 00:18:55.946 "nvme_admin": false, 00:18:55.946 "nvme_io": false, 00:18:55.946 "nvme_io_md": false, 00:18:55.946 "write_zeroes": true, 00:18:55.946 "zcopy": true, 00:18:55.946 "get_zone_info": false, 00:18:55.946 "zone_management": false, 00:18:55.946 "zone_append": false, 00:18:55.946 "compare": false, 00:18:55.946 "compare_and_write": false, 00:18:55.946 "abort": true, 00:18:55.946 "seek_hole": false, 00:18:55.946 "seek_data": false, 00:18:55.946 "copy": true, 00:18:55.946 "nvme_iov_md": false 00:18:55.946 }, 00:18:55.946 "memory_domains": [ 00:18:55.946 { 00:18:55.946 "dma_device_id": "system", 00:18:55.946 "dma_device_type": 1 00:18:55.946 }, 00:18:55.946 { 00:18:55.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.946 "dma_device_type": 2 00:18:55.946 } 00:18:55.946 ], 00:18:55.946 "driver_specific": {} 00:18:55.946 } 00:18:55.946 ] 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.946 
08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.946 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.946 "name": "Existed_Raid", 00:18:55.946 "uuid": "2b628c2e-13ca-4015-b502-f5b81685004d", 00:18:55.946 "strip_size_kb": 0, 00:18:55.946 "state": "online", 00:18:55.946 "raid_level": "raid1", 00:18:55.946 "superblock": true, 00:18:55.946 "num_base_bdevs": 2, 00:18:55.946 "num_base_bdevs_discovered": 2, 00:18:55.946 
"num_base_bdevs_operational": 2, 00:18:55.946 "base_bdevs_list": [ 00:18:55.946 { 00:18:55.946 "name": "BaseBdev1", 00:18:55.946 "uuid": "98efed9c-dc86-4ba3-991e-cb5b54b88a84", 00:18:55.946 "is_configured": true, 00:18:55.946 "data_offset": 256, 00:18:55.946 "data_size": 7936 00:18:55.946 }, 00:18:55.946 { 00:18:55.947 "name": "BaseBdev2", 00:18:55.947 "uuid": "4fc25d41-7e16-4250-94cc-bb573f8ebf4b", 00:18:55.947 "is_configured": true, 00:18:55.947 "data_offset": 256, 00:18:55.947 "data_size": 7936 00:18:55.947 } 00:18:55.947 ] 00:18:55.947 }' 00:18:55.947 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.947 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.205 08:30:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.205 [2024-12-13 08:30:08.494737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.205 "name": "Existed_Raid", 00:18:56.205 "aliases": [ 00:18:56.205 "2b628c2e-13ca-4015-b502-f5b81685004d" 00:18:56.205 ], 00:18:56.205 "product_name": "Raid Volume", 00:18:56.205 "block_size": 4128, 00:18:56.205 "num_blocks": 7936, 00:18:56.205 "uuid": "2b628c2e-13ca-4015-b502-f5b81685004d", 00:18:56.205 "md_size": 32, 00:18:56.205 "md_interleave": true, 00:18:56.205 "dif_type": 0, 00:18:56.205 "assigned_rate_limits": { 00:18:56.205 "rw_ios_per_sec": 0, 00:18:56.205 "rw_mbytes_per_sec": 0, 00:18:56.205 "r_mbytes_per_sec": 0, 00:18:56.205 "w_mbytes_per_sec": 0 00:18:56.205 }, 00:18:56.205 "claimed": false, 00:18:56.205 "zoned": false, 00:18:56.205 "supported_io_types": { 00:18:56.205 "read": true, 00:18:56.205 "write": true, 00:18:56.205 "unmap": false, 00:18:56.205 "flush": false, 00:18:56.205 "reset": true, 00:18:56.205 "nvme_admin": false, 00:18:56.205 "nvme_io": false, 00:18:56.205 "nvme_io_md": false, 00:18:56.205 "write_zeroes": true, 00:18:56.205 "zcopy": false, 00:18:56.205 "get_zone_info": false, 00:18:56.205 "zone_management": false, 00:18:56.205 "zone_append": false, 00:18:56.205 "compare": false, 00:18:56.205 "compare_and_write": false, 00:18:56.205 "abort": false, 00:18:56.205 "seek_hole": false, 00:18:56.205 "seek_data": false, 00:18:56.205 "copy": false, 00:18:56.205 "nvme_iov_md": false 00:18:56.205 }, 00:18:56.205 "memory_domains": [ 00:18:56.205 { 00:18:56.205 "dma_device_id": "system", 00:18:56.205 "dma_device_type": 1 00:18:56.205 }, 00:18:56.205 { 00:18:56.205 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:56.205 "dma_device_type": 2 00:18:56.205 }, 00:18:56.205 { 00:18:56.205 "dma_device_id": "system", 00:18:56.205 "dma_device_type": 1 00:18:56.205 }, 00:18:56.205 { 00:18:56.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.205 "dma_device_type": 2 00:18:56.205 } 00:18:56.205 ], 00:18:56.205 "driver_specific": { 00:18:56.205 "raid": { 00:18:56.205 "uuid": "2b628c2e-13ca-4015-b502-f5b81685004d", 00:18:56.205 "strip_size_kb": 0, 00:18:56.205 "state": "online", 00:18:56.205 "raid_level": "raid1", 00:18:56.205 "superblock": true, 00:18:56.205 "num_base_bdevs": 2, 00:18:56.205 "num_base_bdevs_discovered": 2, 00:18:56.205 "num_base_bdevs_operational": 2, 00:18:56.205 "base_bdevs_list": [ 00:18:56.205 { 00:18:56.205 "name": "BaseBdev1", 00:18:56.205 "uuid": "98efed9c-dc86-4ba3-991e-cb5b54b88a84", 00:18:56.205 "is_configured": true, 00:18:56.205 "data_offset": 256, 00:18:56.205 "data_size": 7936 00:18:56.205 }, 00:18:56.205 { 00:18:56.205 "name": "BaseBdev2", 00:18:56.205 "uuid": "4fc25d41-7e16-4250-94cc-bb573f8ebf4b", 00:18:56.205 "is_configured": true, 00:18:56.205 "data_offset": 256, 00:18:56.205 "data_size": 7936 00:18:56.205 } 00:18:56.205 ] 00:18:56.205 } 00:18:56.205 } 00:18:56.205 }' 00:18:56.205 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:56.464 BaseBdev2' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:56.464 
08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.464 [2024-12-13 08:30:08.722064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.464 08:30:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.464 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.723 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.723 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.723 "name": "Existed_Raid", 00:18:56.723 "uuid": "2b628c2e-13ca-4015-b502-f5b81685004d", 00:18:56.723 "strip_size_kb": 0, 00:18:56.723 "state": "online", 00:18:56.723 "raid_level": "raid1", 00:18:56.723 "superblock": true, 00:18:56.723 "num_base_bdevs": 2, 00:18:56.723 "num_base_bdevs_discovered": 1, 00:18:56.723 "num_base_bdevs_operational": 1, 00:18:56.723 "base_bdevs_list": [ 00:18:56.723 { 00:18:56.723 "name": null, 00:18:56.723 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:56.723 "is_configured": false, 00:18:56.723 "data_offset": 0, 00:18:56.723 "data_size": 7936 00:18:56.723 }, 00:18:56.723 { 00:18:56.723 "name": "BaseBdev2", 00:18:56.723 "uuid": "4fc25d41-7e16-4250-94cc-bb573f8ebf4b", 00:18:56.723 "is_configured": true, 00:18:56.723 "data_offset": 256, 00:18:56.723 "data_size": 7936 00:18:56.723 } 00:18:56.723 ] 00:18:56.723 }' 00:18:56.723 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.723 08:30:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:56.981 08:30:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.981 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.981 [2024-12-13 08:30:09.264829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.981 [2024-12-13 08:30:09.264995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.240 [2024-12-13 08:30:09.360270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.240 [2024-12-13 08:30:09.360389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.240 [2024-12-13 08:30:09.360435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88624 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88624 ']' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88624 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88624 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88624' 00:18:57.240 killing process with pid 88624 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88624 00:18:57.240 [2024-12-13 08:30:09.452482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.240 08:30:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88624 00:18:57.240 [2024-12-13 08:30:09.469470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.619 
08:30:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:58.619 00:18:58.619 real 0m5.001s 00:18:58.619 user 0m7.208s 00:18:58.619 sys 0m0.829s 00:18:58.619 08:30:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.619 08:30:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.619 ************************************ 00:18:58.619 END TEST raid_state_function_test_sb_md_interleaved 00:18:58.619 ************************************ 00:18:58.619 08:30:10 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:58.619 08:30:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:58.619 08:30:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.619 08:30:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.619 ************************************ 00:18:58.619 START TEST raid_superblock_test_md_interleaved 00:18:58.619 ************************************ 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88876 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:58.619 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88876 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88876 ']' 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.620 08:30:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.620 [2024-12-13 08:30:10.726722] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:58.620 [2024-12-13 08:30:10.726952] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88876 ] 00:18:58.620 [2024-12-13 08:30:10.897609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.879 [2024-12-13 08:30:11.016777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.879 [2024-12-13 08:30:11.212041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.879 [2024-12-13 08:30:11.212211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:59.448 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 malloc1 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 [2024-12-13 08:30:11.631283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:59.449 [2024-12-13 08:30:11.631344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.449 [2024-12-13 08:30:11.631366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:59.449 [2024-12-13 08:30:11.631376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.449 
[2024-12-13 08:30:11.633239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.449 [2024-12-13 08:30:11.633276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:59.449 pt1 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 malloc2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 [2024-12-13 08:30:11.687371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:59.449 [2024-12-13 08:30:11.687446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.449 [2024-12-13 08:30:11.687469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:59.449 [2024-12-13 08:30:11.687480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.449 [2024-12-13 08:30:11.689404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.449 [2024-12-13 08:30:11.689508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:59.449 pt2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 [2024-12-13 08:30:11.699388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.449 [2024-12-13 08:30:11.701248] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.449 [2024-12-13 08:30:11.701447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:59.449 [2024-12-13 08:30:11.701462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:59.449 [2024-12-13 08:30:11.701558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:59.449 [2024-12-13 08:30:11.701648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:59.449 [2024-12-13 08:30:11.701659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:59.449 [2024-12-13 08:30:11.701735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.449 
08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.449 "name": "raid_bdev1", 00:18:59.449 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:18:59.449 "strip_size_kb": 0, 00:18:59.449 "state": "online", 00:18:59.449 "raid_level": "raid1", 00:18:59.449 "superblock": true, 00:18:59.449 "num_base_bdevs": 2, 00:18:59.449 "num_base_bdevs_discovered": 2, 00:18:59.449 "num_base_bdevs_operational": 2, 00:18:59.449 "base_bdevs_list": [ 00:18:59.449 { 00:18:59.449 "name": "pt1", 00:18:59.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.449 "is_configured": true, 00:18:59.449 "data_offset": 256, 00:18:59.449 "data_size": 7936 00:18:59.449 }, 00:18:59.449 { 00:18:59.449 "name": "pt2", 00:18:59.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.449 "is_configured": true, 00:18:59.449 "data_offset": 256, 00:18:59.449 "data_size": 7936 00:18:59.449 } 00:18:59.449 ] 00:18:59.449 }' 00:18:59.449 08:30:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.449 08:30:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.018 [2024-12-13 08:30:12.166818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.018 "name": "raid_bdev1", 00:19:00.018 "aliases": [ 00:19:00.018 "3f588086-e242-43b8-8b0a-76fc8c8b2504" 00:19:00.018 ], 00:19:00.018 "product_name": "Raid Volume", 00:19:00.018 "block_size": 4128, 00:19:00.018 "num_blocks": 7936, 00:19:00.018 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:00.018 "md_size": 32, 
00:19:00.018 "md_interleave": true, 00:19:00.018 "dif_type": 0, 00:19:00.018 "assigned_rate_limits": { 00:19:00.018 "rw_ios_per_sec": 0, 00:19:00.018 "rw_mbytes_per_sec": 0, 00:19:00.018 "r_mbytes_per_sec": 0, 00:19:00.018 "w_mbytes_per_sec": 0 00:19:00.018 }, 00:19:00.018 "claimed": false, 00:19:00.018 "zoned": false, 00:19:00.018 "supported_io_types": { 00:19:00.018 "read": true, 00:19:00.018 "write": true, 00:19:00.018 "unmap": false, 00:19:00.018 "flush": false, 00:19:00.018 "reset": true, 00:19:00.018 "nvme_admin": false, 00:19:00.018 "nvme_io": false, 00:19:00.018 "nvme_io_md": false, 00:19:00.018 "write_zeroes": true, 00:19:00.018 "zcopy": false, 00:19:00.018 "get_zone_info": false, 00:19:00.018 "zone_management": false, 00:19:00.018 "zone_append": false, 00:19:00.018 "compare": false, 00:19:00.018 "compare_and_write": false, 00:19:00.018 "abort": false, 00:19:00.018 "seek_hole": false, 00:19:00.018 "seek_data": false, 00:19:00.018 "copy": false, 00:19:00.018 "nvme_iov_md": false 00:19:00.018 }, 00:19:00.018 "memory_domains": [ 00:19:00.018 { 00:19:00.018 "dma_device_id": "system", 00:19:00.018 "dma_device_type": 1 00:19:00.018 }, 00:19:00.018 { 00:19:00.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.018 "dma_device_type": 2 00:19:00.018 }, 00:19:00.018 { 00:19:00.018 "dma_device_id": "system", 00:19:00.018 "dma_device_type": 1 00:19:00.018 }, 00:19:00.018 { 00:19:00.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.018 "dma_device_type": 2 00:19:00.018 } 00:19:00.018 ], 00:19:00.018 "driver_specific": { 00:19:00.018 "raid": { 00:19:00.018 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:00.018 "strip_size_kb": 0, 00:19:00.018 "state": "online", 00:19:00.018 "raid_level": "raid1", 00:19:00.018 "superblock": true, 00:19:00.018 "num_base_bdevs": 2, 00:19:00.018 "num_base_bdevs_discovered": 2, 00:19:00.018 "num_base_bdevs_operational": 2, 00:19:00.018 "base_bdevs_list": [ 00:19:00.018 { 00:19:00.018 "name": "pt1", 00:19:00.018 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:00.018 "is_configured": true, 00:19:00.018 "data_offset": 256, 00:19:00.018 "data_size": 7936 00:19:00.018 }, 00:19:00.018 { 00:19:00.018 "name": "pt2", 00:19:00.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.018 "is_configured": true, 00:19:00.018 "data_offset": 256, 00:19:00.018 "data_size": 7936 00:19:00.018 } 00:19:00.018 ] 00:19:00.018 } 00:19:00.018 } 00:19:00.018 }' 00:19:00.018 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:00.019 pt2' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:00.019 08:30:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.019 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 [2024-12-13 08:30:12.422390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f588086-e242-43b8-8b0a-76fc8c8b2504 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 3f588086-e242-43b8-8b0a-76fc8c8b2504 ']' 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 [2024-12-13 08:30:12.453996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.288 [2024-12-13 08:30:12.454022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.288 [2024-12-13 08:30:12.454127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.288 [2024-12-13 08:30:12.454207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.288 [2024-12-13 08:30:12.454220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.288 08:30:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.288 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 08:30:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 [2024-12-13 08:30:12.565848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:00.289 [2024-12-13 08:30:12.567741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:00.289 [2024-12-13 08:30:12.567812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:00.289 [2024-12-13 08:30:12.567871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:00.289 [2024-12-13 08:30:12.567885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.289 [2024-12-13 08:30:12.567896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:00.289 request: 00:19:00.289 { 00:19:00.289 "name": "raid_bdev1", 00:19:00.289 "raid_level": "raid1", 00:19:00.289 "base_bdevs": [ 00:19:00.289 "malloc1", 00:19:00.289 "malloc2" 00:19:00.289 ], 00:19:00.289 "superblock": false, 00:19:00.289 "method": "bdev_raid_create", 00:19:00.289 "req_id": 1 00:19:00.289 } 00:19:00.289 Got JSON-RPC error response 00:19:00.289 response: 00:19:00.289 { 00:19:00.289 "code": -17, 00:19:00.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:00.289 } 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.289 08:30:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.289 [2024-12-13 08:30:12.633732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:00.289 [2024-12-13 08:30:12.633898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.289 [2024-12-13 08:30:12.633943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:00.289 [2024-12-13 08:30:12.633988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.289 [2024-12-13 08:30:12.636161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.289 [2024-12-13 08:30:12.636258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:00.289 [2024-12-13 08:30:12.636350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:00.289 [2024-12-13 08:30:12.636453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:00.289 pt1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.289 08:30:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.289 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.562 
"name": "raid_bdev1", 00:19:00.562 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:00.562 "strip_size_kb": 0, 00:19:00.562 "state": "configuring", 00:19:00.562 "raid_level": "raid1", 00:19:00.562 "superblock": true, 00:19:00.562 "num_base_bdevs": 2, 00:19:00.562 "num_base_bdevs_discovered": 1, 00:19:00.562 "num_base_bdevs_operational": 2, 00:19:00.562 "base_bdevs_list": [ 00:19:00.562 { 00:19:00.562 "name": "pt1", 00:19:00.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.562 "is_configured": true, 00:19:00.562 "data_offset": 256, 00:19:00.562 "data_size": 7936 00:19:00.562 }, 00:19:00.562 { 00:19:00.562 "name": null, 00:19:00.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.562 "is_configured": false, 00:19:00.562 "data_offset": 256, 00:19:00.562 "data_size": 7936 00:19:00.562 } 00:19:00.562 ] 00:19:00.562 }' 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.562 08:30:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 [2024-12-13 08:30:13.136874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.822 [2024-12-13 08:30:13.136949] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.822 [2024-12-13 08:30:13.136971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:00.822 [2024-12-13 08:30:13.136981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.822 [2024-12-13 08:30:13.137169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.822 [2024-12-13 08:30:13.137188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.822 [2024-12-13 08:30:13.137236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:00.822 [2024-12-13 08:30:13.137259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.822 [2024-12-13 08:30:13.137360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.822 [2024-12-13 08:30:13.137371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:00.822 [2024-12-13 08:30:13.137445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:00.822 [2024-12-13 08:30:13.137513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.822 [2024-12-13 08:30:13.137520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:00.822 [2024-12-13 08:30:13.137583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.822 pt2 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:00.822 08:30:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.822 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.082 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.082 "name": 
"raid_bdev1", 00:19:01.082 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:01.082 "strip_size_kb": 0, 00:19:01.082 "state": "online", 00:19:01.082 "raid_level": "raid1", 00:19:01.082 "superblock": true, 00:19:01.082 "num_base_bdevs": 2, 00:19:01.082 "num_base_bdevs_discovered": 2, 00:19:01.082 "num_base_bdevs_operational": 2, 00:19:01.082 "base_bdevs_list": [ 00:19:01.082 { 00:19:01.082 "name": "pt1", 00:19:01.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.082 "is_configured": true, 00:19:01.082 "data_offset": 256, 00:19:01.082 "data_size": 7936 00:19:01.082 }, 00:19:01.082 { 00:19:01.082 "name": "pt2", 00:19:01.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.082 "is_configured": true, 00:19:01.082 "data_offset": 256, 00:19:01.082 "data_size": 7936 00:19:01.082 } 00:19:01.082 ] 00:19:01.082 }' 00:19:01.082 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.082 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.342 08:30:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.342 [2024-12-13 08:30:13.584390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.342 "name": "raid_bdev1", 00:19:01.342 "aliases": [ 00:19:01.342 "3f588086-e242-43b8-8b0a-76fc8c8b2504" 00:19:01.342 ], 00:19:01.342 "product_name": "Raid Volume", 00:19:01.342 "block_size": 4128, 00:19:01.342 "num_blocks": 7936, 00:19:01.342 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:01.342 "md_size": 32, 00:19:01.342 "md_interleave": true, 00:19:01.342 "dif_type": 0, 00:19:01.342 "assigned_rate_limits": { 00:19:01.342 "rw_ios_per_sec": 0, 00:19:01.342 "rw_mbytes_per_sec": 0, 00:19:01.342 "r_mbytes_per_sec": 0, 00:19:01.342 "w_mbytes_per_sec": 0 00:19:01.342 }, 00:19:01.342 "claimed": false, 00:19:01.342 "zoned": false, 00:19:01.342 "supported_io_types": { 00:19:01.342 "read": true, 00:19:01.342 "write": true, 00:19:01.342 "unmap": false, 00:19:01.342 "flush": false, 00:19:01.342 "reset": true, 00:19:01.342 "nvme_admin": false, 00:19:01.342 "nvme_io": false, 00:19:01.342 "nvme_io_md": false, 00:19:01.342 "write_zeroes": true, 00:19:01.342 "zcopy": false, 00:19:01.342 "get_zone_info": false, 00:19:01.342 "zone_management": false, 00:19:01.342 "zone_append": false, 00:19:01.342 "compare": false, 00:19:01.342 "compare_and_write": false, 00:19:01.342 "abort": false, 00:19:01.342 "seek_hole": false, 00:19:01.342 "seek_data": false, 00:19:01.342 "copy": false, 00:19:01.342 "nvme_iov_md": 
false 00:19:01.342 }, 00:19:01.342 "memory_domains": [ 00:19:01.342 { 00:19:01.342 "dma_device_id": "system", 00:19:01.342 "dma_device_type": 1 00:19:01.342 }, 00:19:01.342 { 00:19:01.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.342 "dma_device_type": 2 00:19:01.342 }, 00:19:01.342 { 00:19:01.342 "dma_device_id": "system", 00:19:01.342 "dma_device_type": 1 00:19:01.342 }, 00:19:01.342 { 00:19:01.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.342 "dma_device_type": 2 00:19:01.342 } 00:19:01.342 ], 00:19:01.342 "driver_specific": { 00:19:01.342 "raid": { 00:19:01.342 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:01.342 "strip_size_kb": 0, 00:19:01.342 "state": "online", 00:19:01.342 "raid_level": "raid1", 00:19:01.342 "superblock": true, 00:19:01.342 "num_base_bdevs": 2, 00:19:01.342 "num_base_bdevs_discovered": 2, 00:19:01.342 "num_base_bdevs_operational": 2, 00:19:01.342 "base_bdevs_list": [ 00:19:01.342 { 00:19:01.342 "name": "pt1", 00:19:01.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.342 "is_configured": true, 00:19:01.342 "data_offset": 256, 00:19:01.342 "data_size": 7936 00:19:01.342 }, 00:19:01.342 { 00:19:01.342 "name": "pt2", 00:19:01.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.342 "is_configured": true, 00:19:01.342 "data_offset": 256, 00:19:01.342 "data_size": 7936 00:19:01.342 } 00:19:01.342 ] 00:19:01.342 } 00:19:01.342 } 00:19:01.342 }' 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:01.342 pt2' 00:19:01.342 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 [2024-12-13 08:30:13.827910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 3f588086-e242-43b8-8b0a-76fc8c8b2504 '!=' 3f588086-e242-43b8-8b0a-76fc8c8b2504 ']' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 [2024-12-13 08:30:13.859670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:01.602 "name": "raid_bdev1", 00:19:01.602 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:01.602 "strip_size_kb": 0, 00:19:01.602 "state": "online", 00:19:01.602 "raid_level": "raid1", 00:19:01.602 "superblock": true, 00:19:01.602 "num_base_bdevs": 2, 00:19:01.602 "num_base_bdevs_discovered": 1, 00:19:01.602 "num_base_bdevs_operational": 1, 00:19:01.602 "base_bdevs_list": [ 00:19:01.602 { 00:19:01.602 "name": null, 00:19:01.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.602 "is_configured": false, 00:19:01.602 "data_offset": 0, 00:19:01.602 "data_size": 7936 00:19:01.602 }, 00:19:01.602 { 00:19:01.602 "name": "pt2", 00:19:01.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.602 "is_configured": true, 00:19:01.602 "data_offset": 256, 00:19:01.602 "data_size": 7936 00:19:01.602 } 00:19:01.602 ] 00:19:01.602 }' 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.602 08:30:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 [2024-12-13 08:30:14.282973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.171 [2024-12-13 08:30:14.283006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.171 [2024-12-13 08:30:14.283091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.171 [2024-12-13 08:30:14.283153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:02.171 [2024-12-13 08:30:14.283165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 [2024-12-13 08:30:14.338856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.171 [2024-12-13 08:30:14.338953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.171 [2024-12-13 08:30:14.339003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:02.171 [2024-12-13 08:30:14.339033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.171 [2024-12-13 08:30:14.340986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.171 [2024-12-13 08:30:14.341062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.171 [2024-12-13 08:30:14.341145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.171 [2024-12-13 08:30:14.341229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.171 [2024-12-13 08:30:14.341305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:02.171 [2024-12-13 08:30:14.341318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:02.171 [2024-12-13 08:30:14.341409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:02.171 [2024-12-13 08:30:14.341476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:02.171 [2024-12-13 08:30:14.341483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:02.171 [2024-12-13 08:30:14.341545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.171 pt2 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.171 08:30:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.171 "name": "raid_bdev1", 00:19:02.171 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:02.171 "strip_size_kb": 0, 00:19:02.171 "state": "online", 00:19:02.171 "raid_level": "raid1", 00:19:02.171 "superblock": true, 00:19:02.171 "num_base_bdevs": 2, 00:19:02.171 "num_base_bdevs_discovered": 1, 00:19:02.171 "num_base_bdevs_operational": 1, 00:19:02.171 "base_bdevs_list": [ 00:19:02.171 { 00:19:02.171 "name": null, 00:19:02.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.171 "is_configured": false, 00:19:02.171 "data_offset": 256, 00:19:02.171 "data_size": 7936 00:19:02.171 }, 00:19:02.171 { 00:19:02.171 "name": "pt2", 00:19:02.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.171 "is_configured": true, 00:19:02.171 "data_offset": 256, 00:19:02.171 "data_size": 7936 00:19:02.171 } 00:19:02.171 ] 00:19:02.171 }' 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.171 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.741 08:30:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 [2024-12-13 08:30:14.810047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.741 [2024-12-13 08:30:14.810081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.741 [2024-12-13 08:30:14.810177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.741 [2024-12-13 08:30:14.810231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.741 [2024-12-13 08:30:14.810240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 [2024-12-13 08:30:14.873972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.741 [2024-12-13 08:30:14.874107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.741 [2024-12-13 08:30:14.874150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:02.741 [2024-12-13 08:30:14.874179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.741 [2024-12-13 08:30:14.876229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.741 [2024-12-13 08:30:14.876309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.741 [2024-12-13 08:30:14.876397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.741 [2024-12-13 08:30:14.876477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.741 [2024-12-13 08:30:14.876633] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:02.741 [2024-12-13 08:30:14.876695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.741 [2024-12-13 08:30:14.876737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:02.741 [2024-12-13 08:30:14.876867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.741 [2024-12-13 08:30:14.876986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:02.741 [2024-12-13 08:30:14.876999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:02.741 [2024-12-13 08:30:14.877084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:02.741 [2024-12-13 08:30:14.877166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:02.741 [2024-12-13 08:30:14.877178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:02.741 [2024-12-13 08:30:14.877253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.741 pt1 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.741 08:30:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.741 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.741 "name": "raid_bdev1", 00:19:02.741 "uuid": "3f588086-e242-43b8-8b0a-76fc8c8b2504", 00:19:02.741 "strip_size_kb": 0, 00:19:02.741 "state": "online", 00:19:02.741 "raid_level": "raid1", 00:19:02.742 "superblock": true, 00:19:02.742 "num_base_bdevs": 2, 00:19:02.742 "num_base_bdevs_discovered": 1, 00:19:02.742 "num_base_bdevs_operational": 1, 00:19:02.742 "base_bdevs_list": [ 00:19:02.742 { 00:19:02.742 "name": null, 00:19:02.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.742 "is_configured": false, 00:19:02.742 "data_offset": 256, 00:19:02.742 "data_size": 7936 00:19:02.742 }, 00:19:02.742 { 00:19:02.742 "name": "pt2", 00:19:02.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.742 "is_configured": true, 00:19:02.742 "data_offset": 256, 00:19:02.742 "data_size": 7936 00:19:02.742 } 00:19:02.742 ] 00:19:02.742 }' 00:19:02.742 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.742 08:30:14 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.001 [2024-12-13 08:30:15.349390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:03.001 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 3f588086-e242-43b8-8b0a-76fc8c8b2504 '!=' 3f588086-e242-43b8-8b0a-76fc8c8b2504 ']' 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88876 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88876 ']' 00:19:03.260 08:30:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88876 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88876 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.260 killing process with pid 88876 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88876' 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88876 00:19:03.260 [2024-12-13 08:30:15.425058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.260 [2024-12-13 08:30:15.425152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.260 [2024-12-13 08:30:15.425203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.260 [2024-12-13 08:30:15.425218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:03.260 08:30:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88876 00:19:03.520 [2024-12-13 08:30:15.633064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.459 ************************************ 00:19:04.459 END TEST raid_superblock_test_md_interleaved 00:19:04.459 ************************************ 00:19:04.459 08:30:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:04.459 00:19:04.459 real 0m6.113s 00:19:04.459 user 0m9.277s 00:19:04.459 sys 0m1.098s 00:19:04.459 08:30:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.459 08:30:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.459 08:30:16 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:04.459 08:30:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:04.459 08:30:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.459 08:30:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.459 ************************************ 00:19:04.459 START TEST raid_rebuild_test_sb_md_interleaved 00:19:04.459 ************************************ 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:04.459 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.719 08:30:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:04.719 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:04.720 
08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89199 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89199 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89199 ']' 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.720 08:30:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.720 [2024-12-13 08:30:16.918592] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:19:04.720 [2024-12-13 08:30:16.918793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:04.720 Zero copy mechanism will not be used. 
00:19:04.720 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89199 ] 00:19:04.979 [2024-12-13 08:30:17.090316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.979 [2024-12-13 08:30:17.206222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.238 [2024-12-13 08:30:17.404115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.238 [2024-12-13 08:30:17.404214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.498 BaseBdev1_malloc 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.498 [2024-12-13 08:30:17.813672] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.498 [2024-12-13 08:30:17.813783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.498 [2024-12-13 08:30:17.813824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:05.498 [2024-12-13 08:30:17.813854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.498 [2024-12-13 08:30:17.815751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.498 [2024-12-13 08:30:17.815834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.498 BaseBdev1 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.498 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.758 BaseBdev2_malloc 00:19:05.758 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.758 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:05.758 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.758 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 [2024-12-13 08:30:17.868453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:19:05.759 [2024-12-13 08:30:17.868511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.759 [2024-12-13 08:30:17.868530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.759 [2024-12-13 08:30:17.868542] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.759 [2024-12-13 08:30:17.870376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.759 [2024-12-13 08:30:17.870467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:05.759 BaseBdev2 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 spare_malloc 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 spare_delay 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 [2024-12-13 08:30:17.951566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.759 [2024-12-13 08:30:17.951625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.759 [2024-12-13 08:30:17.951645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:05.759 [2024-12-13 08:30:17.951655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.759 [2024-12-13 08:30:17.953479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.759 [2024-12-13 08:30:17.953519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.759 spare 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 [2024-12-13 08:30:17.963586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.759 [2024-12-13 08:30:17.965391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.759 [2024-12-13 08:30:17.965585] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.759 [2024-12-13 08:30:17.965602] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.759 [2024-12-13 08:30:17.965677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:05.759 [2024-12-13 08:30:17.965743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.759 [2024-12-13 08:30:17.965751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.759 [2024-12-13 08:30:17.965819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.759 08:30:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 08:30:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.759 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.759 "name": "raid_bdev1", 00:19:05.759 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:05.759 "strip_size_kb": 0, 00:19:05.759 "state": "online", 00:19:05.759 "raid_level": "raid1", 00:19:05.759 "superblock": true, 00:19:05.759 "num_base_bdevs": 2, 00:19:05.759 "num_base_bdevs_discovered": 2, 00:19:05.759 "num_base_bdevs_operational": 2, 00:19:05.759 "base_bdevs_list": [ 00:19:05.759 { 00:19:05.759 "name": "BaseBdev1", 00:19:05.759 "uuid": "763a84aa-4a61-50c8-ae8f-719eabafa0fc", 00:19:05.759 "is_configured": true, 00:19:05.759 "data_offset": 256, 00:19:05.759 "data_size": 7936 00:19:05.759 }, 00:19:05.759 { 00:19:05.759 "name": "BaseBdev2", 00:19:05.759 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:05.759 "is_configured": true, 00:19:05.759 "data_offset": 256, 00:19:05.759 "data_size": 7936 00:19:05.759 } 00:19:05.759 ] 00:19:05.759 }' 00:19:05.759 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.759 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.329 08:30:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.329 [2024-12-13 08:30:18.419155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.329 [2024-12-13 08:30:18.494706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.329 "name": "raid_bdev1", 00:19:06.329 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:06.329 "strip_size_kb": 0, 00:19:06.329 "state": "online", 00:19:06.329 "raid_level": "raid1", 00:19:06.329 "superblock": true, 00:19:06.329 "num_base_bdevs": 2, 00:19:06.329 "num_base_bdevs_discovered": 1, 00:19:06.329 "num_base_bdevs_operational": 1, 00:19:06.329 "base_bdevs_list": [ 00:19:06.329 { 00:19:06.329 "name": null, 00:19:06.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.329 "is_configured": false, 00:19:06.329 "data_offset": 0, 00:19:06.329 "data_size": 7936 00:19:06.329 }, 00:19:06.329 { 00:19:06.329 "name": "BaseBdev2", 00:19:06.329 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:06.329 "is_configured": true, 00:19:06.329 "data_offset": 256, 00:19:06.329 "data_size": 7936 00:19:06.329 } 00:19:06.329 ] 00:19:06.329 }' 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.329 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.589 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.589 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.589 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.589 [2024-12-13 08:30:18.942000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.849 [2024-12-13 08:30:18.959697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:19:06.849 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.849 08:30:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:06.849 [2024-12-13 08:30:18.961597] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.788 08:30:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.788 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.788 "name": "raid_bdev1", 00:19:07.788 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:07.788 "strip_size_kb": 0, 00:19:07.788 "state": "online", 00:19:07.788 "raid_level": "raid1", 00:19:07.788 "superblock": true, 00:19:07.788 
"num_base_bdevs": 2, 00:19:07.788 "num_base_bdevs_discovered": 2, 00:19:07.788 "num_base_bdevs_operational": 2, 00:19:07.788 "process": { 00:19:07.788 "type": "rebuild", 00:19:07.788 "target": "spare", 00:19:07.788 "progress": { 00:19:07.788 "blocks": 2560, 00:19:07.789 "percent": 32 00:19:07.789 } 00:19:07.789 }, 00:19:07.789 "base_bdevs_list": [ 00:19:07.789 { 00:19:07.789 "name": "spare", 00:19:07.789 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:07.789 "is_configured": true, 00:19:07.789 "data_offset": 256, 00:19:07.789 "data_size": 7936 00:19:07.789 }, 00:19:07.789 { 00:19:07.789 "name": "BaseBdev2", 00:19:07.789 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:07.789 "is_configured": true, 00:19:07.789 "data_offset": 256, 00:19:07.789 "data_size": 7936 00:19:07.789 } 00:19:07.789 ] 00:19:07.789 }' 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.789 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.789 [2024-12-13 08:30:20.077463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.049 [2024-12-13 08:30:20.166998] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:08.049 
[2024-12-13 08:30:20.167062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.049 [2024-12-13 08:30:20.167094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.049 [2024-12-13 08:30:20.167107] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.049 "name": "raid_bdev1", 00:19:08.049 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:08.049 "strip_size_kb": 0, 00:19:08.049 "state": "online", 00:19:08.049 "raid_level": "raid1", 00:19:08.049 "superblock": true, 00:19:08.049 "num_base_bdevs": 2, 00:19:08.049 "num_base_bdevs_discovered": 1, 00:19:08.049 "num_base_bdevs_operational": 1, 00:19:08.049 "base_bdevs_list": [ 00:19:08.049 { 00:19:08.049 "name": null, 00:19:08.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.049 "is_configured": false, 00:19:08.049 "data_offset": 0, 00:19:08.049 "data_size": 7936 00:19:08.049 }, 00:19:08.049 { 00:19:08.049 "name": "BaseBdev2", 00:19:08.049 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:08.049 "is_configured": true, 00:19:08.049 "data_offset": 256, 00:19:08.049 "data_size": 7936 00:19:08.049 } 00:19:08.049 ] 00:19:08.049 }' 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.049 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.309 08:30:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.309 "name": "raid_bdev1", 00:19:08.309 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:08.309 "strip_size_kb": 0, 00:19:08.309 "state": "online", 00:19:08.309 "raid_level": "raid1", 00:19:08.309 "superblock": true, 00:19:08.309 "num_base_bdevs": 2, 00:19:08.309 "num_base_bdevs_discovered": 1, 00:19:08.309 "num_base_bdevs_operational": 1, 00:19:08.309 "base_bdevs_list": [ 00:19:08.309 { 00:19:08.309 "name": null, 00:19:08.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.309 "is_configured": false, 00:19:08.309 "data_offset": 0, 00:19:08.309 "data_size": 7936 00:19:08.309 }, 00:19:08.309 { 00:19:08.309 "name": "BaseBdev2", 00:19:08.309 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:08.309 "is_configured": true, 00:19:08.309 "data_offset": 256, 00:19:08.309 "data_size": 7936 00:19:08.309 } 00:19:08.309 ] 00:19:08.309 }' 00:19:08.309 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.568 08:30:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.568 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.568 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.568 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.568 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.568 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.568 [2024-12-13 08:30:20.749127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.569 [2024-12-13 08:30:20.767685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.569 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.569 08:30:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:08.569 [2024-12-13 08:30:20.769656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.505 
08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.505 "name": "raid_bdev1", 00:19:09.505 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:09.505 "strip_size_kb": 0, 00:19:09.505 "state": "online", 00:19:09.505 "raid_level": "raid1", 00:19:09.505 "superblock": true, 00:19:09.505 "num_base_bdevs": 2, 00:19:09.505 "num_base_bdevs_discovered": 2, 00:19:09.505 "num_base_bdevs_operational": 2, 00:19:09.505 "process": { 00:19:09.505 "type": "rebuild", 00:19:09.505 "target": "spare", 00:19:09.505 "progress": { 00:19:09.505 "blocks": 2560, 00:19:09.505 "percent": 32 00:19:09.505 } 00:19:09.505 }, 00:19:09.505 "base_bdevs_list": [ 00:19:09.505 { 00:19:09.505 "name": "spare", 00:19:09.505 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:09.505 "is_configured": true, 00:19:09.505 "data_offset": 256, 00:19:09.505 "data_size": 7936 00:19:09.505 }, 00:19:09.505 { 00:19:09.505 "name": "BaseBdev2", 00:19:09.505 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:09.505 "is_configured": true, 00:19:09.505 "data_offset": 256, 00:19:09.505 "data_size": 7936 00:19:09.505 } 00:19:09.505 ] 00:19:09.505 }' 00:19:09.505 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.764 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.764 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.764 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.764 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:09.764 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:09.764 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.765 "name": "raid_bdev1", 00:19:09.765 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:09.765 "strip_size_kb": 0, 00:19:09.765 "state": "online", 00:19:09.765 "raid_level": "raid1", 00:19:09.765 "superblock": true, 00:19:09.765 "num_base_bdevs": 2, 00:19:09.765 "num_base_bdevs_discovered": 2, 00:19:09.765 "num_base_bdevs_operational": 2, 00:19:09.765 "process": { 00:19:09.765 "type": "rebuild", 00:19:09.765 "target": "spare", 00:19:09.765 "progress": { 00:19:09.765 "blocks": 2816, 00:19:09.765 "percent": 35 00:19:09.765 } 00:19:09.765 }, 00:19:09.765 "base_bdevs_list": [ 00:19:09.765 { 00:19:09.765 "name": "spare", 00:19:09.765 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:09.765 "is_configured": true, 00:19:09.765 "data_offset": 256, 00:19:09.765 "data_size": 7936 00:19:09.765 }, 00:19:09.765 { 00:19:09.765 "name": "BaseBdev2", 00:19:09.765 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:09.765 "is_configured": true, 00:19:09.765 "data_offset": 256, 00:19:09.765 "data_size": 7936 00:19:09.765 } 00:19:09.765 ] 00:19:09.765 }' 00:19:09.765 08:30:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.765 08:30:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.765 08:30:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.765 08:30:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.765 08:30:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.143 "name": "raid_bdev1", 00:19:11.143 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:11.143 "strip_size_kb": 0, 00:19:11.143 "state": 
"online", 00:19:11.143 "raid_level": "raid1", 00:19:11.143 "superblock": true, 00:19:11.143 "num_base_bdevs": 2, 00:19:11.143 "num_base_bdevs_discovered": 2, 00:19:11.143 "num_base_bdevs_operational": 2, 00:19:11.143 "process": { 00:19:11.143 "type": "rebuild", 00:19:11.143 "target": "spare", 00:19:11.143 "progress": { 00:19:11.143 "blocks": 5888, 00:19:11.143 "percent": 74 00:19:11.143 } 00:19:11.143 }, 00:19:11.143 "base_bdevs_list": [ 00:19:11.143 { 00:19:11.143 "name": "spare", 00:19:11.143 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:11.143 "is_configured": true, 00:19:11.143 "data_offset": 256, 00:19:11.143 "data_size": 7936 00:19:11.143 }, 00:19:11.143 { 00:19:11.143 "name": "BaseBdev2", 00:19:11.143 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:11.143 "is_configured": true, 00:19:11.143 "data_offset": 256, 00:19:11.143 "data_size": 7936 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 }' 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.143 08:30:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.711 [2024-12-13 08:30:23.883245] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:11.711 [2024-12-13 08:30:23.883338] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:11.711 [2024-12-13 08:30:23.883464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.970 "name": "raid_bdev1", 00:19:11.970 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:11.970 "strip_size_kb": 0, 00:19:11.970 "state": "online", 00:19:11.970 "raid_level": "raid1", 00:19:11.970 "superblock": true, 00:19:11.970 "num_base_bdevs": 2, 00:19:11.970 "num_base_bdevs_discovered": 2, 00:19:11.970 "num_base_bdevs_operational": 2, 00:19:11.970 "base_bdevs_list": [ 00:19:11.970 { 00:19:11.970 "name": "spare", 00:19:11.970 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:11.970 "is_configured": true, 00:19:11.970 "data_offset": 256, 
00:19:11.970 "data_size": 7936 00:19:11.970 }, 00:19:11.970 { 00:19:11.970 "name": "BaseBdev2", 00:19:11.970 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:11.970 "is_configured": true, 00:19:11.970 "data_offset": 256, 00:19:11.970 "data_size": 7936 00:19:11.970 } 00:19:11.970 ] 00:19:11.970 }' 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:11.970 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.230 08:30:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.230 "name": "raid_bdev1", 00:19:12.230 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:12.230 "strip_size_kb": 0, 00:19:12.230 "state": "online", 00:19:12.230 "raid_level": "raid1", 00:19:12.230 "superblock": true, 00:19:12.230 "num_base_bdevs": 2, 00:19:12.230 "num_base_bdevs_discovered": 2, 00:19:12.230 "num_base_bdevs_operational": 2, 00:19:12.230 "base_bdevs_list": [ 00:19:12.230 { 00:19:12.230 "name": "spare", 00:19:12.230 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:12.230 "is_configured": true, 00:19:12.230 "data_offset": 256, 00:19:12.230 "data_size": 7936 00:19:12.230 }, 00:19:12.230 { 00:19:12.230 "name": "BaseBdev2", 00:19:12.230 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:12.230 "is_configured": true, 00:19:12.230 "data_offset": 256, 00:19:12.230 "data_size": 7936 00:19:12.230 } 00:19:12.230 ] 00:19:12.230 }' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.230 08:30:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.230 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.230 "name": "raid_bdev1", 00:19:12.230 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:12.230 "strip_size_kb": 0, 00:19:12.230 "state": "online", 00:19:12.230 "raid_level": "raid1", 00:19:12.230 "superblock": true, 00:19:12.230 "num_base_bdevs": 2, 00:19:12.230 "num_base_bdevs_discovered": 2, 
00:19:12.230 "num_base_bdevs_operational": 2, 00:19:12.230 "base_bdevs_list": [ 00:19:12.230 { 00:19:12.230 "name": "spare", 00:19:12.230 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:12.230 "is_configured": true, 00:19:12.230 "data_offset": 256, 00:19:12.230 "data_size": 7936 00:19:12.230 }, 00:19:12.230 { 00:19:12.230 "name": "BaseBdev2", 00:19:12.230 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:12.230 "is_configured": true, 00:19:12.230 "data_offset": 256, 00:19:12.230 "data_size": 7936 00:19:12.231 } 00:19:12.231 ] 00:19:12.231 }' 00:19:12.231 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.231 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 [2024-12-13 08:30:24.901941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.800 [2024-12-13 08:30:24.902038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.800 [2024-12-13 08:30:24.902179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.800 [2024-12-13 08:30:24.902286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.800 [2024-12-13 08:30:24.902339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 [2024-12-13 08:30:24.961815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.800 [2024-12-13 08:30:24.961941] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:19:12.800 [2024-12-13 08:30:24.961970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:12.800 [2024-12-13 08:30:24.961981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.800 [2024-12-13 08:30:24.964220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.800 [2024-12-13 08:30:24.964297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.800 [2024-12-13 08:30:24.964388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:12.800 [2024-12-13 08:30:24.964476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.800 [2024-12-13 08:30:24.964636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.800 spare 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 [2024-12-13 08:30:25.064587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:12.800 [2024-12-13 08:30:25.064683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:12.800 [2024-12-13 08:30:25.064819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:12.800 [2024-12-13 08:30:25.064928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:12.800 [2024-12-13 08:30:25.064940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:12.800 [2024-12-13 08:30:25.065039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.800 08:30:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.800 "name": "raid_bdev1", 00:19:12.800 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:12.800 "strip_size_kb": 0, 00:19:12.800 "state": "online", 00:19:12.800 "raid_level": "raid1", 00:19:12.800 "superblock": true, 00:19:12.800 "num_base_bdevs": 2, 00:19:12.800 "num_base_bdevs_discovered": 2, 00:19:12.800 "num_base_bdevs_operational": 2, 00:19:12.800 "base_bdevs_list": [ 00:19:12.800 { 00:19:12.800 "name": "spare", 00:19:12.800 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:12.800 "is_configured": true, 00:19:12.800 "data_offset": 256, 00:19:12.800 "data_size": 7936 00:19:12.800 }, 00:19:12.800 { 00:19:12.800 "name": "BaseBdev2", 00:19:12.800 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:12.800 "is_configured": true, 00:19:12.800 "data_offset": 256, 00:19:12.800 "data_size": 7936 00:19:12.800 } 00:19:12.800 ] 00:19:12.800 }' 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.800 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.369 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.369 "name": "raid_bdev1", 00:19:13.369 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:13.369 "strip_size_kb": 0, 00:19:13.369 "state": "online", 00:19:13.369 "raid_level": "raid1", 00:19:13.369 "superblock": true, 00:19:13.369 "num_base_bdevs": 2, 00:19:13.369 "num_base_bdevs_discovered": 2, 00:19:13.369 "num_base_bdevs_operational": 2, 00:19:13.369 "base_bdevs_list": [ 00:19:13.370 { 00:19:13.370 "name": "spare", 00:19:13.370 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:13.370 "is_configured": true, 00:19:13.370 "data_offset": 256, 00:19:13.370 "data_size": 7936 00:19:13.370 }, 00:19:13.370 { 00:19:13.370 "name": "BaseBdev2", 00:19:13.370 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:13.370 "is_configured": true, 00:19:13.370 "data_offset": 256, 00:19:13.370 "data_size": 7936 00:19:13.370 } 00:19:13.370 ] 00:19:13.370 }' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.370 [2024-12-13 08:30:25.668707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.370 "name": "raid_bdev1", 00:19:13.370 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:13.370 "strip_size_kb": 0, 00:19:13.370 "state": "online", 00:19:13.370 "raid_level": "raid1", 00:19:13.370 "superblock": true, 00:19:13.370 "num_base_bdevs": 2, 00:19:13.370 "num_base_bdevs_discovered": 1, 00:19:13.370 "num_base_bdevs_operational": 1, 00:19:13.370 "base_bdevs_list": [ 00:19:13.370 { 00:19:13.370 "name": null, 00:19:13.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.370 
"is_configured": false, 00:19:13.370 "data_offset": 0, 00:19:13.370 "data_size": 7936 00:19:13.370 }, 00:19:13.370 { 00:19:13.370 "name": "BaseBdev2", 00:19:13.370 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:13.370 "is_configured": true, 00:19:13.370 "data_offset": 256, 00:19:13.370 "data_size": 7936 00:19:13.370 } 00:19:13.370 ] 00:19:13.370 }' 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.370 08:30:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.940 08:30:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.940 08:30:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.940 08:30:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.940 [2024-12-13 08:30:26.119947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.940 [2024-12-13 08:30:26.120249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:13.940 [2024-12-13 08:30:26.120323] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:13.940 [2024-12-13 08:30:26.120383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.940 [2024-12-13 08:30:26.136482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:13.940 08:30:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.940 08:30:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:13.940 [2024-12-13 08:30:26.138499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:14.879 "name": "raid_bdev1", 00:19:14.879 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:14.879 "strip_size_kb": 0, 00:19:14.879 "state": "online", 00:19:14.879 "raid_level": "raid1", 00:19:14.879 "superblock": true, 00:19:14.879 "num_base_bdevs": 2, 00:19:14.879 "num_base_bdevs_discovered": 2, 00:19:14.879 "num_base_bdevs_operational": 2, 00:19:14.879 "process": { 00:19:14.879 "type": "rebuild", 00:19:14.879 "target": "spare", 00:19:14.879 "progress": { 00:19:14.879 "blocks": 2560, 00:19:14.879 "percent": 32 00:19:14.879 } 00:19:14.879 }, 00:19:14.879 "base_bdevs_list": [ 00:19:14.879 { 00:19:14.879 "name": "spare", 00:19:14.879 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:14.879 "is_configured": true, 00:19:14.879 "data_offset": 256, 00:19:14.879 "data_size": 7936 00:19:14.879 }, 00:19:14.879 { 00:19:14.879 "name": "BaseBdev2", 00:19:14.879 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:14.879 "is_configured": true, 00:19:14.879 "data_offset": 256, 00:19:14.879 "data_size": 7936 00:19:14.879 } 00:19:14.879 ] 00:19:14.879 }' 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.879 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.139 [2024-12-13 08:30:27.297868] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.139 [2024-12-13 08:30:27.343986] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.139 [2024-12-13 08:30:27.344062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.139 [2024-12-13 08:30:27.344078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.139 [2024-12-13 08:30:27.344087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.139 08:30:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.139 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.139 "name": "raid_bdev1", 00:19:15.139 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:15.139 "strip_size_kb": 0, 00:19:15.139 "state": "online", 00:19:15.139 "raid_level": "raid1", 00:19:15.139 "superblock": true, 00:19:15.139 "num_base_bdevs": 2, 00:19:15.139 "num_base_bdevs_discovered": 1, 00:19:15.139 "num_base_bdevs_operational": 1, 00:19:15.139 "base_bdevs_list": [ 00:19:15.139 { 00:19:15.139 "name": null, 00:19:15.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.140 "is_configured": false, 00:19:15.140 "data_offset": 0, 00:19:15.140 "data_size": 7936 00:19:15.140 }, 00:19:15.140 { 00:19:15.140 "name": "BaseBdev2", 00:19:15.140 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:15.140 "is_configured": true, 00:19:15.140 "data_offset": 256, 00:19:15.140 "data_size": 7936 00:19:15.140 } 00:19:15.140 ] 00:19:15.140 }' 00:19:15.140 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.140 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.709 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.709 08:30:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.709 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.709 [2024-12-13 08:30:27.822815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.709 [2024-12-13 08:30:27.822958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.709 [2024-12-13 08:30:27.822990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:15.709 [2024-12-13 08:30:27.823001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.709 [2024-12-13 08:30:27.823225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.709 [2024-12-13 08:30:27.823242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.709 [2024-12-13 08:30:27.823317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:15.709 [2024-12-13 08:30:27.823331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.709 [2024-12-13 08:30:27.823342] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:15.709 [2024-12-13 08:30:27.823363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.709 [2024-12-13 08:30:27.839545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:15.709 spare 00:19:15.709 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.709 08:30:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:15.709 [2024-12-13 08:30:27.841415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:16.649 "name": "raid_bdev1", 00:19:16.649 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:16.649 "strip_size_kb": 0, 00:19:16.649 "state": "online", 00:19:16.649 "raid_level": "raid1", 00:19:16.649 "superblock": true, 00:19:16.649 "num_base_bdevs": 2, 00:19:16.649 "num_base_bdevs_discovered": 2, 00:19:16.649 "num_base_bdevs_operational": 2, 00:19:16.649 "process": { 00:19:16.649 "type": "rebuild", 00:19:16.649 "target": "spare", 00:19:16.649 "progress": { 00:19:16.649 "blocks": 2560, 00:19:16.649 "percent": 32 00:19:16.649 } 00:19:16.649 }, 00:19:16.649 "base_bdevs_list": [ 00:19:16.649 { 00:19:16.649 "name": "spare", 00:19:16.649 "uuid": "9f31a9ea-87c0-578c-bff4-cceca9e9a628", 00:19:16.649 "is_configured": true, 00:19:16.649 "data_offset": 256, 00:19:16.649 "data_size": 7936 00:19:16.649 }, 00:19:16.649 { 00:19:16.649 "name": "BaseBdev2", 00:19:16.649 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:16.649 "is_configured": true, 00:19:16.649 "data_offset": 256, 00:19:16.649 "data_size": 7936 00:19:16.649 } 00:19:16.649 ] 00:19:16.649 }' 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:16.649 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.650 08:30:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.650 [2024-12-13 
08:30:28.988924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.909 [2024-12-13 08:30:29.047005] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:16.910 [2024-12-13 08:30:29.047071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.910 [2024-12-13 08:30:29.047089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.910 [2024-12-13 08:30:29.047096] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.910 08:30:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.910 "name": "raid_bdev1", 00:19:16.910 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:16.910 "strip_size_kb": 0, 00:19:16.910 "state": "online", 00:19:16.910 "raid_level": "raid1", 00:19:16.910 "superblock": true, 00:19:16.910 "num_base_bdevs": 2, 00:19:16.910 "num_base_bdevs_discovered": 1, 00:19:16.910 "num_base_bdevs_operational": 1, 00:19:16.910 "base_bdevs_list": [ 00:19:16.910 { 00:19:16.910 "name": null, 00:19:16.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.910 "is_configured": false, 00:19:16.910 "data_offset": 0, 00:19:16.910 "data_size": 7936 00:19:16.910 }, 00:19:16.910 { 00:19:16.910 "name": "BaseBdev2", 00:19:16.910 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:16.910 "is_configured": true, 00:19:16.910 "data_offset": 256, 00:19:16.910 "data_size": 7936 00:19:16.910 } 00:19:16.910 ] 00:19:16.910 }' 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.910 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.169 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.169 08:30:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.169 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.169 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.169 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.435 "name": "raid_bdev1", 00:19:17.435 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:17.435 "strip_size_kb": 0, 00:19:17.435 "state": "online", 00:19:17.435 "raid_level": "raid1", 00:19:17.435 "superblock": true, 00:19:17.435 "num_base_bdevs": 2, 00:19:17.435 "num_base_bdevs_discovered": 1, 00:19:17.435 "num_base_bdevs_operational": 1, 00:19:17.435 "base_bdevs_list": [ 00:19:17.435 { 00:19:17.435 "name": null, 00:19:17.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.435 "is_configured": false, 00:19:17.435 "data_offset": 0, 00:19:17.435 "data_size": 7936 00:19:17.435 }, 00:19:17.435 { 00:19:17.435 "name": "BaseBdev2", 00:19:17.435 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:17.435 "is_configured": true, 00:19:17.435 "data_offset": 256, 
00:19:17.435 "data_size": 7936 00:19:17.435 } 00:19:17.435 ] 00:19:17.435 }' 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.435 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 [2024-12-13 08:30:29.694886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.435 [2024-12-13 08:30:29.694947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.435 [2024-12-13 08:30:29.694986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:17.435 [2024-12-13 08:30:29.694994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.435 [2024-12-13 08:30:29.695236] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.435 [2024-12-13 08:30:29.695278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.436 [2024-12-13 08:30:29.695363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:17.436 [2024-12-13 08:30:29.695399] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.436 [2024-12-13 08:30:29.695473] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:17.436 [2024-12-13 08:30:29.695505] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:17.436 BaseBdev1 00:19:17.436 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.436 08:30:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.383 08:30:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.383 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.641 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.641 "name": "raid_bdev1", 00:19:18.641 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:18.641 "strip_size_kb": 0, 00:19:18.641 "state": "online", 00:19:18.641 "raid_level": "raid1", 00:19:18.641 "superblock": true, 00:19:18.641 "num_base_bdevs": 2, 00:19:18.641 "num_base_bdevs_discovered": 1, 00:19:18.641 "num_base_bdevs_operational": 1, 00:19:18.641 "base_bdevs_list": [ 00:19:18.641 { 00:19:18.641 "name": null, 00:19:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.641 "is_configured": false, 00:19:18.641 "data_offset": 0, 00:19:18.641 "data_size": 7936 00:19:18.641 }, 00:19:18.641 { 00:19:18.641 "name": "BaseBdev2", 00:19:18.641 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:18.641 "is_configured": true, 00:19:18.641 "data_offset": 256, 00:19:18.641 "data_size": 7936 00:19:18.641 } 00:19:18.641 ] 00:19:18.641 }' 00:19:18.641 08:30:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.641 08:30:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.900 "name": "raid_bdev1", 00:19:18.900 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:18.900 "strip_size_kb": 0, 00:19:18.900 "state": "online", 00:19:18.900 "raid_level": "raid1", 00:19:18.900 "superblock": true, 00:19:18.900 "num_base_bdevs": 2, 00:19:18.900 "num_base_bdevs_discovered": 1, 00:19:18.900 "num_base_bdevs_operational": 1, 00:19:18.900 "base_bdevs_list": [ 00:19:18.900 { 00:19:18.900 "name": 
null, 00:19:18.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.900 "is_configured": false, 00:19:18.900 "data_offset": 0, 00:19:18.900 "data_size": 7936 00:19:18.900 }, 00:19:18.900 { 00:19:18.900 "name": "BaseBdev2", 00:19:18.900 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:18.900 "is_configured": true, 00:19:18.900 "data_offset": 256, 00:19:18.900 "data_size": 7936 00:19:18.900 } 00:19:18.900 ] 00:19:18.900 }' 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.900 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.159 [2024-12-13 08:30:31.296198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.159 [2024-12-13 08:30:31.296427] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.159 [2024-12-13 08:30:31.296450] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:19.159 request: 00:19:19.159 { 00:19:19.159 "base_bdev": "BaseBdev1", 00:19:19.159 "raid_bdev": "raid_bdev1", 00:19:19.159 "method": "bdev_raid_add_base_bdev", 00:19:19.159 "req_id": 1 00:19:19.159 } 00:19:19.159 Got JSON-RPC error response 00:19:19.159 response: 00:19:19.159 { 00:19:19.159 "code": -22, 00:19:19.159 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:19.159 } 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.159 08:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.096 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.097 "name": "raid_bdev1", 00:19:20.097 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:20.097 "strip_size_kb": 0, 
00:19:20.097 "state": "online", 00:19:20.097 "raid_level": "raid1", 00:19:20.097 "superblock": true, 00:19:20.097 "num_base_bdevs": 2, 00:19:20.097 "num_base_bdevs_discovered": 1, 00:19:20.097 "num_base_bdevs_operational": 1, 00:19:20.097 "base_bdevs_list": [ 00:19:20.097 { 00:19:20.097 "name": null, 00:19:20.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.097 "is_configured": false, 00:19:20.097 "data_offset": 0, 00:19:20.097 "data_size": 7936 00:19:20.097 }, 00:19:20.097 { 00:19:20.097 "name": "BaseBdev2", 00:19:20.097 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:20.097 "is_configured": true, 00:19:20.097 "data_offset": 256, 00:19:20.097 "data_size": 7936 00:19:20.097 } 00:19:20.097 ] 00:19:20.097 }' 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.097 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.666 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.666 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.666 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.666 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.666 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.667 
08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.667 "name": "raid_bdev1", 00:19:20.667 "uuid": "2940997d-e523-4587-9a6f-15505dea2451", 00:19:20.667 "strip_size_kb": 0, 00:19:20.667 "state": "online", 00:19:20.667 "raid_level": "raid1", 00:19:20.667 "superblock": true, 00:19:20.667 "num_base_bdevs": 2, 00:19:20.667 "num_base_bdevs_discovered": 1, 00:19:20.667 "num_base_bdevs_operational": 1, 00:19:20.667 "base_bdevs_list": [ 00:19:20.667 { 00:19:20.667 "name": null, 00:19:20.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.667 "is_configured": false, 00:19:20.667 "data_offset": 0, 00:19:20.667 "data_size": 7936 00:19:20.667 }, 00:19:20.667 { 00:19:20.667 "name": "BaseBdev2", 00:19:20.667 "uuid": "5b4e058f-0885-5514-b634-aaec90a1873f", 00:19:20.667 "is_configured": true, 00:19:20.667 "data_offset": 256, 00:19:20.667 "data_size": 7936 00:19:20.667 } 00:19:20.667 ] 00:19:20.667 }' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89199 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89199 ']' 00:19:20.667 08:30:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89199 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89199 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.667 killing process with pid 89199 00:19:20.667 Received shutdown signal, test time was about 60.000000 seconds 00:19:20.667 00:19:20.667 Latency(us) 00:19:20.667 [2024-12-13T08:30:33.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.667 [2024-12-13T08:30:33.032Z] =================================================================================================================== 00:19:20.667 [2024-12-13T08:30:33.032Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89199' 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89199 00:19:20.667 [2024-12-13 08:30:32.948461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.667 [2024-12-13 08:30:32.948591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.667 08:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89199 00:19:20.667 [2024-12-13 08:30:32.948640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:20.667 [2024-12-13 08:30:32.948652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:20.927 [2024-12-13 08:30:33.245198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.307 08:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:22.307 00:19:22.307 real 0m17.516s 00:19:22.307 user 0m22.896s 00:19:22.307 sys 0m1.624s 00:19:22.307 ************************************ 00:19:22.307 END TEST raid_rebuild_test_sb_md_interleaved 00:19:22.307 ************************************ 00:19:22.307 08:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.307 08:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.307 08:30:34 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:22.307 08:30:34 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:22.307 08:30:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89199 ']' 00:19:22.308 08:30:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89199 00:19:22.308 08:30:34 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:22.308 ************************************ 00:19:22.308 END TEST bdev_raid 00:19:22.308 ************************************ 00:19:22.308 00:19:22.308 real 12m5.268s 00:19:22.308 user 16m23.858s 00:19:22.308 sys 1m52.921s 00:19:22.308 08:30:34 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.308 08:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.308 08:30:34 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:22.308 08:30:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:22.308 08:30:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.308 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:19:22.308 
************************************ 00:19:22.308 START TEST spdkcli_raid 00:19:22.308 ************************************ 00:19:22.308 08:30:34 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:22.308 * Looking for test storage... 00:19:22.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:22.308 08:30:34 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.308 08:30:34 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.308 08:30:34 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.567 08:30:34 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.567 08:30:34 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:22.567 08:30:34 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.567 08:30:34 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.567 --rc genhtml_branch_coverage=1 00:19:22.567 --rc genhtml_function_coverage=1 00:19:22.567 --rc genhtml_legend=1 00:19:22.567 --rc geninfo_all_blocks=1 00:19:22.567 --rc geninfo_unexecuted_blocks=1 00:19:22.567 00:19:22.567 ' 00:19:22.567 08:30:34 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.567 --rc genhtml_branch_coverage=1 00:19:22.567 --rc genhtml_function_coverage=1 00:19:22.568 --rc genhtml_legend=1 00:19:22.568 --rc geninfo_all_blocks=1 00:19:22.568 --rc geninfo_unexecuted_blocks=1 00:19:22.568 00:19:22.568 ' 00:19:22.568 
08:30:34 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.568 --rc genhtml_branch_coverage=1 00:19:22.568 --rc genhtml_function_coverage=1 00:19:22.568 --rc genhtml_legend=1 00:19:22.568 --rc geninfo_all_blocks=1 00:19:22.568 --rc geninfo_unexecuted_blocks=1 00:19:22.568 00:19:22.568 ' 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.568 --rc genhtml_branch_coverage=1 00:19:22.568 --rc genhtml_function_coverage=1 00:19:22.568 --rc genhtml_legend=1 00:19:22.568 --rc geninfo_all_blocks=1 00:19:22.568 --rc geninfo_unexecuted_blocks=1 00:19:22.568 00:19:22.568 ' 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:22.568 08:30:34 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89875 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:22.568 08:30:34 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89875 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89875 ']' 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.568 08:30:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.568 [2024-12-13 08:30:34.847314] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:19:22.568 [2024-12-13 08:30:34.847512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89875 ] 00:19:22.838 [2024-12-13 08:30:35.019663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:22.838 [2024-12-13 08:30:35.138356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.838 [2024-12-13 08:30:35.138392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.777 08:30:35 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.777 08:30:35 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:23.777 08:30:35 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:23.777 08:30:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.777 08:30:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.777 08:30:36 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:23.777 08:30:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.777 08:30:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.777 08:30:36 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:23.777 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:23.777 ' 00:19:25.678 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:25.678 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:25.678 08:30:37 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:25.678 08:30:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.678 08:30:37 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.678 08:30:37 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:25.678 08:30:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.678 08:30:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.678 08:30:37 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:25.678 ' 00:19:26.613 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:26.613 08:30:38 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:26.613 08:30:38 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.613 08:30:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.613 08:30:38 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:26.613 08:30:38 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.613 08:30:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.613 08:30:38 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:26.613 08:30:38 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:27.178 08:30:39 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:27.178 08:30:39 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:27.178 08:30:39 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:27.178 08:30:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.178 08:30:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.435 08:30:39 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:27.435 08:30:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.435 08:30:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.435 08:30:39 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:27.435 ' 00:19:28.369 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:28.369 08:30:40 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:28.369 08:30:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.369 08:30:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.369 08:30:40 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:28.369 08:30:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.369 08:30:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.369 08:30:40 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:28.369 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:28.369 ' 00:19:29.744 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:29.744 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:30.002 08:30:42 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.002 08:30:42 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89875 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89875 ']' 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89875 00:19:30.002 08:30:42 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89875 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89875' 00:19:30.002 killing process with pid 89875 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89875 00:19:30.002 08:30:42 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89875 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89875 ']' 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89875 00:19:32.534 08:30:44 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89875 ']' 00:19:32.534 08:30:44 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89875 00:19:32.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89875) - No such process 00:19:32.534 08:30:44 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89875 is not found' 00:19:32.534 Process with pid 89875 is not found 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:32.534 08:30:44 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:32.534 ************************************ 00:19:32.534 END TEST spdkcli_raid 
00:19:32.534 ************************************ 00:19:32.534 00:19:32.534 real 0m10.230s 00:19:32.534 user 0m21.085s 00:19:32.534 sys 0m1.154s 00:19:32.534 08:30:44 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.534 08:30:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.534 08:30:44 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:32.534 08:30:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.534 08:30:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.534 08:30:44 -- common/autotest_common.sh@10 -- # set +x 00:19:32.534 ************************************ 00:19:32.534 START TEST blockdev_raid5f 00:19:32.534 ************************************ 00:19:32.534 08:30:44 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:32.794 * Looking for test storage... 00:19:32.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:32.794 08:30:44 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:32.794 08:30:44 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:19:32.794 08:30:44 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:32.794 08:30:44 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.794 08:30:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.794 08:30:45 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:32.794 08:30:45 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.794 08:30:45 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:32.794 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.794 --rc genhtml_branch_coverage=1 00:19:32.794 --rc genhtml_function_coverage=1 00:19:32.794 --rc genhtml_legend=1 00:19:32.794 --rc geninfo_all_blocks=1 00:19:32.794 --rc geninfo_unexecuted_blocks=1 00:19:32.794 00:19:32.794 ' 00:19:32.794 08:30:45 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:32.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.794 --rc genhtml_branch_coverage=1 00:19:32.794 --rc genhtml_function_coverage=1 00:19:32.794 --rc genhtml_legend=1 00:19:32.794 --rc geninfo_all_blocks=1 00:19:32.794 --rc geninfo_unexecuted_blocks=1 00:19:32.794 00:19:32.794 ' 00:19:32.794 08:30:45 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:32.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.794 --rc genhtml_branch_coverage=1 00:19:32.794 --rc genhtml_function_coverage=1 00:19:32.794 --rc genhtml_legend=1 00:19:32.794 --rc geninfo_all_blocks=1 00:19:32.794 --rc geninfo_unexecuted_blocks=1 00:19:32.794 00:19:32.794 ' 00:19:32.794 08:30:45 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:32.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.794 --rc genhtml_branch_coverage=1 00:19:32.794 --rc genhtml_function_coverage=1 00:19:32.794 --rc genhtml_legend=1 00:19:32.794 --rc geninfo_all_blocks=1 00:19:32.795 --rc geninfo_unexecuted_blocks=1 00:19:32.795 00:19:32.795 ' 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90154 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90154 00:19:32.795 08:30:45 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90154 ']' 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.795 08:30:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:32.795 [2024-12-13 08:30:45.134175] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:19:32.795 [2024-12-13 08:30:45.134387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90154 ] 00:19:33.053 [2024-12-13 08:30:45.307494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.311 [2024-12-13 08:30:45.427554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:34.274 08:30:46 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 Malloc0 00:19:34.274 Malloc1 00:19:34.274 Malloc2 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.274 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:34.275 08:30:46 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.275 08:30:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.275 08:30:46 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3509d1a7-516f-4bb7-80c6-ab6081df5cb3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3509d1a7-516f-4bb7-80c6-ab6081df5cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3509d1a7-516f-4bb7-80c6-ab6081df5cb3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0f718c94-6fff-4ef5-9ef1-4932315c71c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7dcd9099-8c32-4212-847f-50717a75a53a",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e1e6df44-5631-43cf-93a4-06b908189c11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:34.275 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:34.533 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:34.533 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:34.533 08:30:46 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90154 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90154 ']' 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90154 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90154 00:19:34.533 killing process with pid 90154 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90154' 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90154 00:19:34.533 08:30:46 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90154 00:19:37.066 08:30:49 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:37.066 08:30:49 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:37.066 08:30:49 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:37.066 08:30:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.066 08:30:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.066 ************************************ 00:19:37.066 START TEST bdev_hello_world 00:19:37.066 ************************************ 00:19:37.066 08:30:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:37.066 [2024-12-13 08:30:49.367921] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:19:37.066 [2024-12-13 08:30:49.368053] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90217 ] 00:19:37.325 [2024-12-13 08:30:49.546038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.325 [2024-12-13 08:30:49.661922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.893 [2024-12-13 08:30:50.162629] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:37.893 [2024-12-13 08:30:50.162787] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:37.893 [2024-12-13 08:30:50.162807] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:37.893 [2024-12-13 08:30:50.163324] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:37.893 [2024-12-13 08:30:50.163457] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:37.893 [2024-12-13 08:30:50.163475] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:37.893 [2024-12-13 08:30:50.163528] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:37.893 00:19:37.893 [2024-12-13 08:30:50.163546] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:39.269 00:19:39.269 real 0m2.249s 00:19:39.269 user 0m1.874s 00:19:39.269 sys 0m0.257s 00:19:39.269 ************************************ 00:19:39.269 END TEST bdev_hello_world 00:19:39.269 ************************************ 00:19:39.269 08:30:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.269 08:30:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:39.269 08:30:51 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:39.269 08:30:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.269 08:30:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.269 08:30:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.269 ************************************ 00:19:39.269 START TEST bdev_bounds 00:19:39.269 ************************************ 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90259 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90259' 00:19:39.269 Process bdevio pid: 90259 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90259 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90259 ']' 00:19:39.269 08:30:51 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.269 08:30:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:39.528 [2024-12-13 08:30:51.686246] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:19:39.528 [2024-12-13 08:30:51.686467] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90259 ] 00:19:39.528 [2024-12-13 08:30:51.866177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.787 [2024-12-13 08:30:51.986944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.787 [2024-12-13 08:30:51.987023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.787 [2024-12-13 08:30:51.987028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.354 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.354 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:40.354 08:30:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:40.354 I/O targets: 00:19:40.354 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:40.354 00:19:40.354 
00:19:40.354 CUnit - A unit testing framework for C - Version 2.1-3 00:19:40.354 http://cunit.sourceforge.net/ 00:19:40.354 00:19:40.354 00:19:40.354 Suite: bdevio tests on: raid5f 00:19:40.354 Test: blockdev write read block ...passed 00:19:40.354 Test: blockdev write zeroes read block ...passed 00:19:40.354 Test: blockdev write zeroes read no split ...passed 00:19:40.612 Test: blockdev write zeroes read split ...passed 00:19:40.612 Test: blockdev write zeroes read split partial ...passed 00:19:40.612 Test: blockdev reset ...passed 00:19:40.612 Test: blockdev write read 8 blocks ...passed 00:19:40.612 Test: blockdev write read size > 128k ...passed 00:19:40.612 Test: blockdev write read invalid size ...passed 00:19:40.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.613 Test: blockdev write read max offset ...passed 00:19:40.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.613 Test: blockdev writev readv 8 blocks ...passed 00:19:40.613 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.613 Test: blockdev writev readv block ...passed 00:19:40.613 Test: blockdev writev readv size > 128k ...passed 00:19:40.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.613 Test: blockdev comparev and writev ...passed 00:19:40.613 Test: blockdev nvme passthru rw ...passed 00:19:40.613 Test: blockdev nvme passthru vendor specific ...passed 00:19:40.613 Test: blockdev nvme admin passthru ...passed 00:19:40.613 Test: blockdev copy ...passed 00:19:40.613 00:19:40.613 Run Summary: Type Total Ran Passed Failed Inactive 00:19:40.613 suites 1 1 n/a 0 0 00:19:40.613 tests 23 23 23 0 0 00:19:40.613 asserts 130 130 130 0 n/a 00:19:40.613 00:19:40.613 Elapsed time = 0.566 seconds 00:19:40.613 0 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90259 00:19:40.613 
08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90259 ']' 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90259 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90259 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90259' 00:19:40.613 killing process with pid 90259 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90259 00:19:40.613 08:30:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90259 00:19:41.988 08:30:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:41.988 00:19:41.988 real 0m2.748s 00:19:41.988 user 0m6.813s 00:19:41.988 sys 0m0.399s 00:19:41.988 08:30:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.988 08:30:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:41.988 ************************************ 00:19:41.988 END TEST bdev_bounds 00:19:41.988 ************************************ 00:19:42.247 08:30:54 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:42.247 08:30:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:42.247 08:30:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.247 
08:30:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.247 ************************************ 00:19:42.247 START TEST bdev_nbd 00:19:42.247 ************************************ 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90324 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:42.247 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90324 /var/tmp/spdk-nbd.sock 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90324 ']' 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:42.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.248 08:30:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:42.248 [2024-12-13 08:30:54.506386] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:19:42.248 [2024-12-13 08:30:54.506601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.506 [2024-12-13 08:30:54.682701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.506 [2024-12-13 08:30:54.791674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:43.073 08:30:55 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.332 1+0 records in 00:19:43.332 1+0 records out 00:19:43.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388729 s, 10.5 MB/s 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:43.332 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:43.590 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:43.590 { 00:19:43.590 "nbd_device": "/dev/nbd0", 00:19:43.590 "bdev_name": "raid5f" 00:19:43.590 } 00:19:43.590 ]' 00:19:43.590 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:43.590 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:43.590 { 00:19:43.590 "nbd_device": "/dev/nbd0", 00:19:43.590 "bdev_name": "raid5f" 00:19:43.590 } 00:19:43.590 ]' 00:19:43.590 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.591 08:30:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:43.849 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.107 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.108 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.108 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:44.108 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.108 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.108 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:44.366 /dev/nbd0 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.366 08:30:56 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.366 1+0 records in 00:19:44.366 1+0 records out 00:19:44.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528375 s, 7.8 MB/s 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.366 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:44.625 { 00:19:44.625 "nbd_device": "/dev/nbd0", 00:19:44.625 "bdev_name": "raid5f" 00:19:44.625 } 00:19:44.625 ]' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:44.625 { 00:19:44.625 "nbd_device": "/dev/nbd0", 00:19:44.625 "bdev_name": "raid5f" 00:19:44.625 } 00:19:44.625 ]' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:44.625 256+0 records in 00:19:44.625 256+0 records out 00:19:44.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128254 s, 81.8 MB/s 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:44.625 256+0 records in 00:19:44.625 256+0 records out 00:19:44.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302227 s, 34.7 MB/s 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.625 08:30:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.884 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:45.143 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:45.402 malloc_lvol_verify 00:19:45.402 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:45.660 548e2cea-ed0b-4ef4-960e-f797c78a59d5 00:19:45.660 08:30:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:45.919 9bb8a93e-458b-427b-bd19-3439a769c07b 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:45.919 /dev/nbd0 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:45.919 mke2fs 1.47.0 (5-Feb-2023) 00:19:45.919 Discarding device blocks: 0/4096 done 00:19:45.919 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:45.919 00:19:45.919 Allocating group tables: 0/1 done 00:19:45.919 Writing inode tables: 0/1 done 00:19:45.919 Creating journal (1024 blocks): done 00:19:45.919 Writing superblocks and filesystem accounting information: 0/1 done 00:19:45.919 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.919 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90324 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90324 ']' 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90324 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90324 00:19:46.178 killing process with pid 90324 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90324' 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90324 00:19:46.178 08:30:58 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90324 00:19:48.081 08:30:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:48.081 00:19:48.081 real 0m5.535s 00:19:48.081 user 0m7.525s 00:19:48.081 sys 0m1.274s 00:19:48.081 08:30:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.081 08:30:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:48.081 ************************************ 00:19:48.081 END TEST bdev_nbd 00:19:48.081 ************************************ 00:19:48.082 08:30:59 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:48.082 08:30:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:48.082 08:30:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:48.082 08:30:59 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:48.082 08:30:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.082 08:30:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.082 08:31:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:48.082 ************************************ 00:19:48.082 START TEST bdev_fio 00:19:48.082 ************************************ 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:48.082 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:48.082 ************************************ 00:19:48.082 START TEST bdev_fio_rw_verify 00:19:48.082 ************************************ 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:48.082 08:31:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:48.082 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:48.082 fio-3.35 00:19:48.082 Starting 1 thread 00:20:00.294 00:20:00.294 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90524: Fri Dec 13 08:31:11 2024 00:20:00.294 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec) 00:20:00.294 slat (usec): min=18, max=418, avg=21.61, stdev= 3.45 00:20:00.294 clat (usec): min=10, max=1232, avg=141.83, stdev=53.98 00:20:00.294 lat (usec): min=30, max=1261, avg=163.45, stdev=54.93 00:20:00.294 clat percentiles (usec): 00:20:00.294 | 50.000th=[ 141], 99.000th=[ 255], 99.900th=[ 351], 99.990th=[ 873], 00:20:00.294 | 99.999th=[ 1205] 00:20:00.294 write: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(458MiB/9883msec); 0 zone resets 00:20:00.294 slat (usec): min=7, max=267, avg=17.78, stdev= 4.40 00:20:00.294 clat (usec): min=57, max=1664, avg=322.72, stdev=58.55 00:20:00.294 lat (usec): min=73, max=1931, avg=340.50, stdev=60.63 00:20:00.294 clat percentiles (usec): 00:20:00.294 | 50.000th=[ 326], 99.000th=[ 445], 99.900th=[ 947], 99.990th=[ 1532], 00:20:00.294 | 99.999th=[ 1631] 00:20:00.294 bw ( KiB/s): min=42784, max=50344, per=99.37%, avg=47138.53, stdev=2272.26, samples=19 00:20:00.294 iops : min=10696, max=12586, avg=11784.63, stdev=568.06, samples=19 00:20:00.294 lat (usec) : 20=0.01%, 50=0.01%, 
100=13.07%, 250=38.76%, 500=47.91% 00:20:00.294 lat (usec) : 750=0.15%, 1000=0.06% 00:20:00.294 lat (msec) : 2=0.04% 00:20:00.294 cpu : usr=98.98%, sys=0.42%, ctx=26, majf=0, minf=9381 00:20:00.294 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.294 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.294 issued rwts: total=112652,117210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.294 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:00.294 00:20:00.294 Run status group 0 (all jobs): 00:20:00.294 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:20:00.294 WRITE: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=458MiB (480MB), run=9883-9883msec 00:20:00.554 ----------------------------------------------------- 00:20:00.554 Suppressions used: 00:20:00.554 count bytes template 00:20:00.554 1 7 /usr/src/fio/parse.c 00:20:00.554 1032 99072 /usr/src/fio/iolog.c 00:20:00.554 1 8 libtcmalloc_minimal.so 00:20:00.554 1 904 libcrypto.so 00:20:00.554 ----------------------------------------------------- 00:20:00.554 00:20:00.554 00:20:00.554 real 0m12.760s 00:20:00.554 user 0m12.951s 00:20:00.554 sys 0m0.651s 00:20:00.554 ************************************ 00:20:00.554 END TEST bdev_fio_rw_verify 00:20:00.554 ************************************ 00:20:00.554 08:31:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.554 08:31:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:00.818 08:31:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:00.818 08:31:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.818 08:31:12 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:00.818 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3509d1a7-516f-4bb7-80c6-ab6081df5cb3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3509d1a7-516f-4bb7-80c6-ab6081df5cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3509d1a7-516f-4bb7-80c6-ab6081df5cb3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0f718c94-6fff-4ef5-9ef1-4932315c71c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7dcd9099-8c32-4212-847f-50717a75a53a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e1e6df44-5631-43cf-93a4-06b908189c11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:00.819 08:31:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.819 /home/vagrant/spdk_repo/spdk 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:20:00.819 00:20:00.819 real 0m13.017s 00:20:00.819 user 0m13.069s 00:20:00.819 sys 0m0.769s 00:20:00.819 ************************************ 00:20:00.819 END TEST bdev_fio 00:20:00.819 ************************************ 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.819 08:31:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:00.819 08:31:13 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:00.819 08:31:13 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:00.819 08:31:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:00.819 08:31:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.819 08:31:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:00.819 ************************************ 00:20:00.819 START TEST bdev_verify 00:20:00.819 ************************************ 00:20:00.819 08:31:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:01.092 [2024-12-13 08:31:13.190729] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:20:01.092 [2024-12-13 08:31:13.190933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90683 ] 00:20:01.092 [2024-12-13 08:31:13.370784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:01.361 [2024-12-13 08:31:13.487367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.361 [2024-12-13 08:31:13.487401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.927 Running I/O for 5 seconds... 00:20:03.796 13104.00 IOPS, 51.19 MiB/s [2024-12-13T08:31:17.097Z] 13784.00 IOPS, 53.84 MiB/s [2024-12-13T08:31:18.033Z] 14161.33 IOPS, 55.32 MiB/s [2024-12-13T08:31:19.412Z] 14463.00 IOPS, 56.50 MiB/s [2024-12-13T08:31:19.412Z] 14866.80 IOPS, 58.07 MiB/s 00:20:07.047 Latency(us) 00:20:07.047 [2024-12-13T08:31:19.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.047 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:07.047 Verification LBA range: start 0x0 length 0x2000 00:20:07.047 raid5f : 5.02 7428.17 29.02 0.00 0.00 25875.08 117.16 24153.88 00:20:07.047 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.047 Verification LBA range: start 0x2000 length 0x2000 00:20:07.047 raid5f : 5.01 7418.41 28.98 0.00 0.00 25967.54 259.35 24153.88 00:20:07.047 [2024-12-13T08:31:19.412Z] =================================================================================================================== 00:20:07.047 [2024-12-13T08:31:19.412Z] Total : 14846.57 57.99 0.00 0.00 25921.25 117.16 24153.88 00:20:08.428 00:20:08.428 real 0m7.455s 00:20:08.428 user 0m13.767s 00:20:08.428 sys 0m0.277s 00:20:08.428 08:31:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.428 08:31:20 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:08.428 ************************************ 00:20:08.428 END TEST bdev_verify 00:20:08.428 ************************************ 00:20:08.428 08:31:20 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:08.428 08:31:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:08.428 08:31:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.428 08:31:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:08.428 ************************************ 00:20:08.428 START TEST bdev_verify_big_io 00:20:08.428 ************************************ 00:20:08.428 08:31:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:08.428 [2024-12-13 08:31:20.700943] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:08.428 [2024-12-13 08:31:20.701149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90783 ] 00:20:08.739 [2024-12-13 08:31:20.875032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:08.739 [2024-12-13 08:31:21.015604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.739 [2024-12-13 08:31:21.015638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.309 Running I/O for 5 seconds... 
00:20:11.635 758.00 IOPS, 47.38 MiB/s [2024-12-13T08:31:24.948Z] 792.00 IOPS, 49.50 MiB/s [2024-12-13T08:31:25.888Z] 782.00 IOPS, 48.88 MiB/s [2024-12-13T08:31:26.825Z] 808.25 IOPS, 50.52 MiB/s [2024-12-13T08:31:27.083Z] 812.40 IOPS, 50.77 MiB/s 00:20:14.718 Latency(us) 00:20:14.718 [2024-12-13T08:31:27.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.718 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:14.718 Verification LBA range: start 0x0 length 0x200 00:20:14.718 raid5f : 5.28 396.72 24.79 0.00 0.00 7903509.19 138.62 355325.32 00:20:14.718 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:14.718 Verification LBA range: start 0x200 length 0x200 00:20:14.718 raid5f : 5.28 396.76 24.80 0.00 0.00 7876912.20 243.26 355325.32 00:20:14.718 [2024-12-13T08:31:27.083Z] =================================================================================================================== 00:20:14.718 [2024-12-13T08:31:27.083Z] Total : 793.48 49.59 0.00 0.00 7890210.70 138.62 355325.32 00:20:16.098 00:20:16.098 real 0m7.735s 00:20:16.098 user 0m14.313s 00:20:16.098 sys 0m0.314s 00:20:16.098 08:31:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.098 ************************************ 00:20:16.098 END TEST bdev_verify_big_io 00:20:16.098 ************************************ 00:20:16.098 08:31:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.098 08:31:28 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.098 08:31:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:16.098 08:31:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.098 08:31:28 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:16.098 ************************************ 00:20:16.098 START TEST bdev_write_zeroes 00:20:16.098 ************************************ 00:20:16.098 08:31:28 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.358 [2024-12-13 08:31:28.512796] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:16.358 [2024-12-13 08:31:28.513031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90882 ] 00:20:16.358 [2024-12-13 08:31:28.689952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.617 [2024-12-13 08:31:28.809004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.186 Running I/O for 1 seconds... 
00:20:18.123 24975.00 IOPS, 97.56 MiB/s 00:20:18.123 Latency(us) 00:20:18.123 [2024-12-13T08:31:30.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.123 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:18.123 raid5f : 1.01 24956.60 97.49 0.00 0.00 5113.19 1380.83 7898.66 00:20:18.123 [2024-12-13T08:31:30.488Z] =================================================================================================================== 00:20:18.123 [2024-12-13T08:31:30.488Z] Total : 24956.60 97.49 0.00 0.00 5113.19 1380.83 7898.66 00:20:19.512 00:20:19.513 real 0m3.285s 00:20:19.513 user 0m2.911s 00:20:19.513 sys 0m0.245s 00:20:19.513 ************************************ 00:20:19.513 END TEST bdev_write_zeroes 00:20:19.513 ************************************ 00:20:19.513 08:31:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.513 08:31:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:19.513 08:31:31 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:19.513 08:31:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:19.513 08:31:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.513 08:31:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:19.513 ************************************ 00:20:19.513 START TEST bdev_json_nonenclosed 00:20:19.513 ************************************ 00:20:19.513 08:31:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:19.513 [2024-12-13 
08:31:31.866337] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:19.513 [2024-12-13 08:31:31.866524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90937 ] 00:20:19.771 [2024-12-13 08:31:32.039762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.031 [2024-12-13 08:31:32.146734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.031 [2024-12-13 08:31:32.146924] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:20.031 [2024-12-13 08:31:32.146956] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:20.031 [2024-12-13 08:31:32.146967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:20.031 00:20:20.031 real 0m0.599s 00:20:20.031 user 0m0.373s 00:20:20.031 sys 0m0.121s 00:20:20.031 ************************************ 00:20:20.031 END TEST bdev_json_nonenclosed 00:20:20.031 ************************************ 00:20:20.031 08:31:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.031 08:31:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 08:31:32 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:20.290 08:31:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:20.290 08:31:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.290 08:31:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 
************************************ 00:20:20.290 START TEST bdev_json_nonarray 00:20:20.290 ************************************ 00:20:20.290 08:31:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:20.290 [2024-12-13 08:31:32.540848] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:20.290 [2024-12-13 08:31:32.540975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90967 ] 00:20:20.549 [2024-12-13 08:31:32.719988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.549 [2024-12-13 08:31:32.828305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.549 [2024-12-13 08:31:32.828508] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:20.549  [2024-12-13 08:31:32.828529] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:20.549  [2024-12-13 08:31:32.828549] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:20.808  
00:20:20.808  real	0m0.613s
00:20:20.808  user	0m0.377s
00:20:20.808  sys	0m0.131s
00:20:20.808  ************************************
00:20:20.809  END TEST bdev_json_nonarray
00:20:20.809  ************************************
00:20:20.809  08:31:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:20.809  08:31:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:20.809  08:31:33 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:20.809  ************************************
00:20:20.809  END TEST blockdev_raid5f
00:20:20.809  ************************************
00:20:20.809  
00:20:20.809  real	0m48.344s
00:20:20.809  user	1m5.623s
00:20:20.809  sys	0m4.879s
00:20:20.809  08:31:33 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:20.809  08:31:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:21.068  08:31:33 -- spdk/autotest.sh@194 -- # uname -s
00:20:21.068  08:31:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:21.068  08:31:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:21.068  08:31:33 -- common/autotest_common.sh@10 -- # set +x
00:20:21.068  08:31:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:21.068  08:31:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:21.068  08:31:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:21.068  08:31:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:21.068  08:31:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:21.068  08:31:33 -- common/autotest_common.sh@10 -- # set +x
00:20:21.068  08:31:33 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:21.068  08:31:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:21.068  08:31:33 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:21.068  08:31:33 -- common/autotest_common.sh@10 -- # set +x
00:20:23.605  INFO: APP EXITING
00:20:23.605  INFO: killing all VMs
00:20:23.605  INFO: killing vhost app
00:20:23.605  INFO: EXIT DONE
00:20:23.605  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:23.605  Waiting for block devices as requested
00:20:23.864  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:23.864  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:24.816  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:24.816  Cleaning
00:20:24.816  Removing: /var/run/dpdk/spdk0/config
00:20:24.816  Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:24.816  Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:24.816  Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:24.816  Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:24.816  Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:24.816  Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:24.816  Removing: /dev/shm/spdk_tgt_trace.pid57002
00:20:24.816  Removing: /var/run/dpdk/spdk0
00:20:24.816  Removing: /var/run/dpdk/spdk_pid56761
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57002
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57231
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57346
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57402
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57541
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57559
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57775
00:20:24.816  Removing: /var/run/dpdk/spdk_pid57893
00:20:24.816  Removing: /var/run/dpdk/spdk_pid58005
00:20:24.816  Removing: /var/run/dpdk/spdk_pid58133
00:20:24.816  Removing: /var/run/dpdk/spdk_pid58241
00:20:24.816  Removing: /var/run/dpdk/spdk_pid58286
00:20:24.817  Removing: /var/run/dpdk/spdk_pid58317
00:20:24.817  Removing: /var/run/dpdk/spdk_pid58393
00:20:24.817  Removing: /var/run/dpdk/spdk_pid58521
00:20:24.817  Removing: /var/run/dpdk/spdk_pid58976
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59051
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59130
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59152
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59303
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59325
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59475
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59496
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59566
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59584
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59648
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59677
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59872
00:20:24.817  Removing: /var/run/dpdk/spdk_pid59916
00:20:24.817  Removing: /var/run/dpdk/spdk_pid60005
00:20:24.817  Removing: /var/run/dpdk/spdk_pid61358
00:20:24.817  Removing: /var/run/dpdk/spdk_pid61570
00:20:24.817  Removing: /var/run/dpdk/spdk_pid61710
00:20:24.817  Removing: /var/run/dpdk/spdk_pid62359
00:20:24.817  Removing: /var/run/dpdk/spdk_pid62565
00:20:24.817  Removing: /var/run/dpdk/spdk_pid62705
00:20:24.817  Removing: /var/run/dpdk/spdk_pid63354
00:20:25.075  Removing: /var/run/dpdk/spdk_pid63684
00:20:25.075  Removing: /var/run/dpdk/spdk_pid63824
00:20:25.075  Removing: /var/run/dpdk/spdk_pid65220
00:20:25.075  Removing: /var/run/dpdk/spdk_pid65481
00:20:25.075  Removing: /var/run/dpdk/spdk_pid65621
00:20:25.076  Removing: /var/run/dpdk/spdk_pid67012
00:20:25.076  Removing: /var/run/dpdk/spdk_pid67265
00:20:25.076  Removing: /var/run/dpdk/spdk_pid67416
00:20:25.076  Removing: /var/run/dpdk/spdk_pid68801
00:20:25.076  Removing: /var/run/dpdk/spdk_pid69247
00:20:25.076  Removing: /var/run/dpdk/spdk_pid69394
00:20:25.076  Removing: /var/run/dpdk/spdk_pid70890
00:20:25.076  Removing: /var/run/dpdk/spdk_pid71162
00:20:25.076  Removing: /var/run/dpdk/spdk_pid71308
00:20:25.076  Removing: /var/run/dpdk/spdk_pid72807
00:20:25.076  Removing: /var/run/dpdk/spdk_pid73072
00:20:25.076  Removing: /var/run/dpdk/spdk_pid73218
00:20:25.076  Removing: /var/run/dpdk/spdk_pid74713
00:20:25.076  Removing: /var/run/dpdk/spdk_pid75200
00:20:25.076  Removing: /var/run/dpdk/spdk_pid75350
00:20:25.076  Removing: /var/run/dpdk/spdk_pid75495
00:20:25.076  Removing: /var/run/dpdk/spdk_pid75919
00:20:25.076  Removing: /var/run/dpdk/spdk_pid76643
00:20:25.076  Removing: /var/run/dpdk/spdk_pid77024
00:20:25.076  Removing: /var/run/dpdk/spdk_pid77715
00:20:25.076  Removing: /var/run/dpdk/spdk_pid78158
00:20:25.076  Removing: /var/run/dpdk/spdk_pid78919
00:20:25.076  Removing: /var/run/dpdk/spdk_pid79331
00:20:25.076  Removing: /var/run/dpdk/spdk_pid81312
00:20:25.076  Removing: /var/run/dpdk/spdk_pid81750
00:20:25.076  Removing: /var/run/dpdk/spdk_pid82201
00:20:25.076  Removing: /var/run/dpdk/spdk_pid84298
00:20:25.076  Removing: /var/run/dpdk/spdk_pid84778
00:20:25.076  Removing: /var/run/dpdk/spdk_pid85283
00:20:25.076  Removing: /var/run/dpdk/spdk_pid86346
00:20:25.076  Removing: /var/run/dpdk/spdk_pid86669
00:20:25.076  Removing: /var/run/dpdk/spdk_pid87608
00:20:25.076  Removing: /var/run/dpdk/spdk_pid87932
00:20:25.076  Removing: /var/run/dpdk/spdk_pid88876
00:20:25.076  Removing: /var/run/dpdk/spdk_pid89199
00:20:25.076  Removing: /var/run/dpdk/spdk_pid89875
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90154
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90217
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90259
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90510
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90683
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90783
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90882
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90937
00:20:25.076  Removing: /var/run/dpdk/spdk_pid90967
00:20:25.076  Clean
00:20:25.334  08:31:37 -- common/autotest_common.sh@1453 -- # return 0
00:20:25.334  08:31:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:25.334  08:31:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:25.334  08:31:37 -- common/autotest_common.sh@10 -- # set +x
00:20:25.334  08:31:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:25.334  08:31:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:25.334  08:31:37 -- common/autotest_common.sh@10 -- # set +x
00:20:25.334  08:31:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:25.334  08:31:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:25.334  08:31:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:25.334  08:31:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:25.334  08:31:37 -- spdk/autotest.sh@398 -- # hostname
00:20:25.334  08:31:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:25.594  geninfo: WARNING: invalid characters removed from testname!
00:20:47.556  08:31:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:48.933  08:32:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:50.838  08:32:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:52.755  08:32:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:54.667  08:32:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:56.570  08:32:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:58.475  08:32:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:58.475  08:32:10 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:58.475  08:32:10 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:58.475  08:32:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:58.475  08:32:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:58.475  08:32:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:58.475  + [[ -n 5433 ]]
00:20:58.475  + sudo kill 5433
00:20:58.485  [Pipeline] }
00:20:58.500  [Pipeline] // timeout
00:20:58.506  [Pipeline] }
00:20:58.522  [Pipeline] // stage
00:20:58.528  [Pipeline] }
00:20:58.543  [Pipeline] // catchError
00:20:58.553  [Pipeline] stage
00:20:58.556  [Pipeline] { (Stop VM)
00:20:58.569  [Pipeline] sh
00:20:58.852  + vagrant halt
00:21:01.480  ==> default: Halting domain...
00:21:08.065  [Pipeline] sh
00:21:08.349  + vagrant destroy -f
00:21:10.891  ==> default: Removing domain...
00:21:10.905  [Pipeline] sh
00:21:11.190  + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:21:11.201  [Pipeline] }
00:21:11.217  [Pipeline] // stage
00:21:11.222  [Pipeline] }
00:21:11.237  [Pipeline] // dir
00:21:11.242  [Pipeline] }
00:21:11.257  [Pipeline] // wrap
00:21:11.263  [Pipeline] }
00:21:11.275  [Pipeline] // catchError
00:21:11.285  [Pipeline] stage
00:21:11.287  [Pipeline] { (Epilogue)
00:21:11.303  [Pipeline] sh
00:21:11.588  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:15.801  [Pipeline] catchError
00:21:15.803  [Pipeline] {
00:21:15.816  [Pipeline] sh
00:21:16.121  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:16.121  Artifacts sizes are good
00:21:16.154  [Pipeline] }
00:21:16.166  [Pipeline] // catchError
00:21:16.175  [Pipeline] archiveArtifacts
00:21:16.182  Archiving artifacts
00:21:16.280  [Pipeline] cleanWs
00:21:16.290  [WS-CLEANUP] Deleting project workspace...
00:21:16.290  [WS-CLEANUP] Deferred wipeout is used...
00:21:16.297  [WS-CLEANUP] done
00:21:16.299  [Pipeline] }
00:21:16.312  [Pipeline] // stage
00:21:16.317  [Pipeline] }
00:21:16.330  [Pipeline] // node
00:21:16.335  [Pipeline] End of Pipeline
00:21:16.370  Finished: SUCCESS